00:00:00.001 Started by upstream project "autotest-per-patch" build number 124254
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.023 The recommended git tool is: git
00:00:00.023 using credential 00000000-0000-0000-0000-000000000002
00:00:00.025 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.039 Fetching changes from the remote Git repository
00:00:00.041 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.061 Using shallow fetch with depth 1
00:00:00.061 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.061 > git --version # timeout=10
00:00:00.089 > git --version # 'git version 2.39.2'
00:00:00.089 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.131 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.131 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.897 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.909 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.920 Checking out Revision bdda68d1e41499f94b336830106e36e3602574f3 (FETCH_HEAD)
00:00:02.920 > git config core.sparsecheckout # timeout=10
00:00:02.929 > git read-tree -mu HEAD # timeout=10
00:00:02.944 > git checkout -f bdda68d1e41499f94b336830106e36e3602574f3 # timeout=5
00:00:02.961 Commit message: "jenkins/jjb-config: Make sure proxies are set for pkgdep jobs"
00:00:02.961 > git rev-list --no-walk bdda68d1e41499f94b336830106e36e3602574f3 # timeout=10
00:00:03.037 [Pipeline] Start of Pipeline
00:00:03.054 [Pipeline] library
00:00:03.056 Loading library shm_lib@master
00:00:03.056 Library shm_lib@master is cached. Copying from home.
00:00:03.073 [Pipeline] node
00:00:18.075 Still waiting to schedule task
00:00:18.075 Waiting for next available executor on ‘DiskNvme&&DevQAT’
00:02:44.510 Running on WFP20 in /var/jenkins/workspace/crypto-phy-autotest
00:02:44.513 [Pipeline] {
00:02:44.529 [Pipeline] catchError
00:02:44.531 [Pipeline] {
00:02:44.548 [Pipeline] wrap
00:02:44.563 [Pipeline] {
00:02:44.572 [Pipeline] stage
00:02:44.573 [Pipeline] { (Prologue)
00:02:44.782 [Pipeline] sh
00:02:45.061 + logger -p user.info -t JENKINS-CI
00:02:45.082 [Pipeline] echo
00:02:45.084 Node: WFP20
00:02:45.094 [Pipeline] sh
00:02:45.389 [Pipeline] setCustomBuildProperty
00:02:45.402 [Pipeline] echo
00:02:45.404 Cleanup processes
00:02:45.410 [Pipeline] sh
00:02:45.690 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:45.690 1432286 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:45.703 [Pipeline] sh
00:02:45.979 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:45.979 ++ grep -v 'sudo pgrep'
00:02:45.979 ++ awk '{print $1}'
00:02:45.979 + sudo kill -9
00:02:45.979 + true
00:02:45.994 [Pipeline] cleanWs
00:02:46.004 [WS-CLEANUP] Deleting project workspace...
00:02:46.004 [WS-CLEANUP] Deferred wipeout is used...
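The "Cleanup processes" step above chains pgrep, grep and awk to reap anything still running out of the workspace before the build starts. A standalone sketch of that pipeline (unprivileged, without the log's sudo, and demonstrated against a throwaway scratch directory rather than the real Jenkins workspace):

```shell
#!/usr/bin/env bash
# Reap stale processes left under a workspace, mirroring the log's
# pgrep | grep -v | awk | kill -9 pipeline.
cleanup_stale() {
    local workspace=$1
    local pids
    # pgrep -a prints the full command line, -f matches against it; drop
    # the pgrep invocation itself, keep only the PID column.
    pids=$(pgrep -af "$workspace/spdk" | grep -v 'pgrep' | awk '{print $1}')
    # With no survivors, "kill -9" runs with no arguments and fails,
    # hence the "+ true" seen in the log.
    kill -9 $pids 2>/dev/null || true
}

cleanup_stale "$(mktemp -d)"
echo "cleanup done"
```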
00:02:46.010 [WS-CLEANUP] done
00:02:46.015 [Pipeline] setCustomBuildProperty
00:02:46.029 [Pipeline] sh
00:02:46.307 + sudo git config --global --replace-all safe.directory '*'
00:02:46.381 [Pipeline] nodesByLabel
00:02:46.383 Found a total of 2 nodes with the 'sorcerer' label
00:02:46.391 [Pipeline] httpRequest
00:02:46.395 HttpMethod: GET
00:02:46.395 URL: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:02:46.396 Sending request to url: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:02:46.399 Response Code: HTTP/1.1 200 OK
00:02:46.400 Success: Status code 200 is in the accepted range: 200,404
00:02:46.400 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:02:46.543 [Pipeline] sh
00:02:46.821 + tar --no-same-owner -xf jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:02:46.836 [Pipeline] httpRequest
00:02:46.840 HttpMethod: GET
00:02:46.841 URL: http://10.211.164.101/packages/spdk_5456a66b7eaa2e12f524eb5dffa53bad87ea9680.tar.gz
00:02:46.841 Sending request to url: http://10.211.164.101/packages/spdk_5456a66b7eaa2e12f524eb5dffa53bad87ea9680.tar.gz
00:02:46.842 Response Code: HTTP/1.1 200 OK
00:02:46.842 Success: Status code 200 is in the accepted range: 200,404
00:02:46.843 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_5456a66b7eaa2e12f524eb5dffa53bad87ea9680.tar.gz
00:02:49.010 [Pipeline] sh
00:02:49.292 + tar --no-same-owner -xf spdk_5456a66b7eaa2e12f524eb5dffa53bad87ea9680.tar.gz
00:02:52.593 [Pipeline] sh
00:02:52.902 + git -C spdk log --oneline -n5
00:02:52.902 5456a66b7 nvmf/tcp: Add support for the interrupt mode in NVMe-of TCP
00:02:52.902 6432dbc62 nvmf/tcp: move await_req handling to nvmf_tcp_req_put()
00:02:52.902 495943646 nvmf: move register nvmf_poll_group_poll interrupt to nvmf
00:02:52.902 2369a5119 nvmf: fail early when interrupt mode is not supported
00:02:52.902 e08210f7f nvmf/tcp: replace pending_buf_queue with iobuf callbacks
00:02:52.914 [Pipeline] }
00:02:52.934 [Pipeline] // stage
00:02:52.944 [Pipeline] stage
00:02:52.947 [Pipeline] { (Prepare)
00:02:52.967 [Pipeline] writeFile
00:02:52.985 [Pipeline] sh
00:02:53.269 + logger -p user.info -t JENKINS-CI
00:02:53.282 [Pipeline] sh
00:02:53.565 + logger -p user.info -t JENKINS-CI
00:02:53.577 [Pipeline] sh
00:02:53.861 + cat autorun-spdk.conf
00:02:53.861 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.861 SPDK_TEST_BLOCKDEV=1
00:02:53.861 SPDK_TEST_ISAL=1
00:02:53.861 SPDK_TEST_CRYPTO=1
00:02:53.861 SPDK_TEST_REDUCE=1
00:02:53.861 SPDK_TEST_VBDEV_COMPRESS=1
00:02:53.861 SPDK_RUN_UBSAN=1
00:02:53.869 RUN_NIGHTLY=0
00:02:53.875 [Pipeline] readFile
00:02:53.902 [Pipeline] withEnv
00:02:53.904 [Pipeline] {
00:02:53.920 [Pipeline] sh
00:02:54.205 + set -ex
00:02:54.205 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:02:54.205 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:02:54.205 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:54.205 ++ SPDK_TEST_BLOCKDEV=1
00:02:54.205 ++ SPDK_TEST_ISAL=1
00:02:54.205 ++ SPDK_TEST_CRYPTO=1
00:02:54.205 ++ SPDK_TEST_REDUCE=1
00:02:54.205 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:02:54.205 ++ SPDK_RUN_UBSAN=1
00:02:54.205 ++ RUN_NIGHTLY=0
00:02:54.205 + case $SPDK_TEST_NVMF_NICS in
00:02:54.205 + DRIVERS=
00:02:54.205 + [[ -n '' ]]
00:02:54.205 + exit 0
00:02:54.214 [Pipeline] }
00:02:54.234 [Pipeline] // withEnv
00:02:54.240 [Pipeline] }
00:02:54.260 [Pipeline] // stage
00:02:54.270 [Pipeline] catchError
00:02:54.272 [Pipeline] {
00:02:54.288 [Pipeline] timeout
00:02:54.288 Timeout set to expire in 40 min
00:02:54.290 [Pipeline] {
00:02:54.307 [Pipeline] stage
00:02:54.309 [Pipeline] { (Tests)
00:02:54.326 [Pipeline] sh
00:02:54.610 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:02:54.610 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:02:54.610 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:02:54.611 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:02:54.611 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:54.611 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:02:54.611 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:02:54.611 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:02:54.611 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:02:54.611 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:02:54.611 + [[ crypto-phy-autotest == pkgdep-* ]]
00:02:54.611 + cd /var/jenkins/workspace/crypto-phy-autotest
00:02:54.611 + source /etc/os-release
00:02:54.611 ++ NAME='Fedora Linux'
00:02:54.611 ++ VERSION='38 (Cloud Edition)'
00:02:54.611 ++ ID=fedora
00:02:54.611 ++ VERSION_ID=38
00:02:54.611 ++ VERSION_CODENAME=
00:02:54.611 ++ PLATFORM_ID=platform:f38
00:02:54.611 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:54.611 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:54.611 ++ LOGO=fedora-logo-icon
00:02:54.611 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:54.611 ++ HOME_URL=https://fedoraproject.org/
00:02:54.611 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:54.611 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:54.611 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:54.611 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:54.611 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:54.611 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:54.611 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:54.611 ++ SUPPORT_END=2024-05-14
00:02:54.611 ++ VARIANT='Cloud Edition'
00:02:54.611 ++ VARIANT_ID=cloud
00:02:54.611 + uname -a
00:02:54.611 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:54.611 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:02:57.901 Hugepages
00:02:57.901 node hugesize free / total
00:02:58.161 node0 1048576kB 0 / 0
00:02:58.161 node0 2048kB 0 / 0
00:02:58.161 node1 1048576kB 0 / 0
00:02:58.161 node1 2048kB 0 / 0
00:02:58.161
00:02:58.161 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:58.161 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:58.161 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:58.161 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:58.161 + rm -f /tmp/spdk-ld-path
00:02:58.161 + source autorun-spdk.conf
00:02:58.161 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.161 ++ SPDK_TEST_BLOCKDEV=1
00:02:58.161 ++ SPDK_TEST_ISAL=1
00:02:58.161 ++ SPDK_TEST_CRYPTO=1
00:02:58.161 ++ SPDK_TEST_REDUCE=1
00:02:58.161 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:02:58.161 ++ SPDK_RUN_UBSAN=1
00:02:58.161 ++ RUN_NIGHTLY=0
00:02:58.161 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:58.161 + [[ -n '' ]]
00:02:58.161 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:58.161 + for M in /var/spdk/build-*-manifest.txt
00:02:58.161 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:58.161 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:02:58.161 + for M in /var/spdk/build-*-manifest.txt
00:02:58.161 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:58.161 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:02:58.161 ++ uname
00:02:58.161 + [[ Linux == \L\i\n\u\x ]]
00:02:58.161 + sudo dmesg -T
00:02:58.420 + sudo dmesg --clear
00:02:58.420 + dmesg_pid=1433341
00:02:58.420 + [[ Fedora Linux == FreeBSD ]]
00:02:58.420 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:58.420 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:58.420 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:58.420 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:58.420 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:58.420 + [[ -x /usr/src/fio-static/fio ]]
00:02:58.420 + export FIO_BIN=/usr/src/fio-static/fio
00:02:58.420 + FIO_BIN=/usr/src/fio-static/fio
00:02:58.420 + sudo dmesg -Tw
00:02:58.420 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:58.420 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:58.420 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:58.420 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:58.420 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:58.420 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:58.420 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:58.420 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:58.420 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:02:58.420 Test configuration:
00:02:58.420 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.420 SPDK_TEST_BLOCKDEV=1
00:02:58.420 SPDK_TEST_ISAL=1
00:02:58.420 SPDK_TEST_CRYPTO=1
00:02:58.420 SPDK_TEST_REDUCE=1
00:02:58.420 SPDK_TEST_VBDEV_COMPRESS=1
00:02:58.420 SPDK_RUN_UBSAN=1
00:02:58.420 RUN_NIGHTLY=0
18:47:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:02:58.420 18:47:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:58.420 18:47:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:58.420 18:47:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:58.420 18:47:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.420 18:47:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.420 18:47:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.420 18:47:13 -- paths/export.sh@5 -- $ export PATH
00:02:58.420 18:47:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.420 18:47:13 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:02:58.420 18:47:13 -- common/autobuild_common.sh@437 -- $ date +%s
00:02:58.420 18:47:13 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718038033.XXXXXX
00:02:58.420 18:47:13 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718038033.5hgxtf
00:02:58.420 18:47:13 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:02:58.420 18:47:13 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:02:58.420 18:47:13 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:02:58.420 18:47:13 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:58.420 18:47:13 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:58.420 18:47:13 -- common/autobuild_common.sh@453 -- $ get_config_params
00:02:58.420 18:47:13 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:58.420 18:47:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.420 18:47:13 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:02:58.420 18:47:13 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:02:58.420 18:47:13 -- pm/common@17 -- $ local monitor
00:02:58.420 18:47:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.420 18:47:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.420 18:47:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.420 18:47:13 -- pm/common@21 -- $ date +%s
00:02:58.420 18:47:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.420 18:47:13 -- pm/common@21 -- $ date +%s
00:02:58.420 18:47:13 -- pm/common@25 -- $ sleep 1
00:02:58.420 18:47:13 -- pm/common@21 -- $ date +%s
00:02:58.420 18:47:13 -- pm/common@21 -- $ date +%s
00:02:58.420 18:47:13 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718038033
00:02:58.420 18:47:13 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718038033
00:02:58.420 18:47:13 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718038033
00:02:58.420 18:47:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718038033
00:02:58.679 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718038033_collect-cpu-temp.pm.log
00:02:58.679 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718038033_collect-vmstat.pm.log
00:02:58.679 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718038033_collect-cpu-load.pm.log
00:02:58.679 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718038033_collect-bmc-pm.bmc.pm.log
00:02:59.616 18:47:14 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:02:59.616 18:47:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:59.616 18:47:14 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:59.616 18:47:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:02:59.616 18:47:14 -- spdk/autobuild.sh@16 -- $ date -u
00:02:59.616 Mon Jun 10 04:47:14 PM UTC 2024
00:02:59.616 18:47:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:59.616 v24.09-pre-70-g5456a66b7
00:02:59.616 18:47:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:59.616 18:47:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:59.616 18:47:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:59.616 18:47:14 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:59.616 18:47:14 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:59.616 18:47:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:59.616 ************************************
00:02:59.616 START TEST ubsan
00:02:59.616 ************************************
00:02:59.616 18:47:14 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:02:59.616 using ubsan
00:02:59.616
00:02:59.616 real 0m0.001s
00:02:59.616 user 0m0.001s
00:02:59.616 sys 0m0.000s
00:02:59.616 18:47:14 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:02:59.616 18:47:14 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:59.616 ************************************
00:02:59.616 END TEST ubsan
00:02:59.616 ************************************
00:02:59.616 18:47:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:59.616 18:47:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:59.616 18:47:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:59.616 18:47:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:59.616 18:47:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:59.616 18:47:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:59.616 18:47:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:59.616 18:47:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:59.616 18:47:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:02:59.616 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:02:59.616 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:03:00.184 Using 'verbs' RDMA provider
00:03:16.444 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:31.330 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:31.330 Creating mk/config.mk...done.
00:03:31.330 Creating mk/cc.flags.mk...done.
00:03:31.330 Type 'make' to build.
00:03:31.330 18:47:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:03:31.330 18:47:44 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:03:31.330 18:47:44 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:03:31.330 18:47:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:31.330 ************************************
00:03:31.330 START TEST make
00:03:31.330 ************************************
00:03:31.330 18:47:44 make -- common/autotest_common.sh@1124 -- $ make -j112
00:03:31.330 make[1]: Nothing to be done for 'all'.
00:04:03.476 The Meson build system
00:04:03.476 Version: 1.3.1
00:04:03.476 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:04:03.476 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:04:03.476 Build type: native build
00:04:03.476 Program cat found: YES (/usr/bin/cat)
00:04:03.476 Project name: DPDK
00:04:03.476 Project version: 24.03.0
00:04:03.476 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:03.476 C linker for the host machine: cc ld.bfd 2.39-16
00:04:03.476 Host machine cpu family: x86_64
00:04:03.476 Host machine cpu: x86_64
00:04:03.476 Message: ## Building in Developer Mode ##
00:04:03.476 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:03.476 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:03.476 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:03.476 Program python3 found: YES (/usr/bin/python3)
00:04:03.476 Program cat found: YES (/usr/bin/cat)
00:04:03.476 Compiler for C supports arguments -march=native: YES
00:04:03.476 Checking for size of "void *" : 8
00:04:03.476 Checking for size of "void *" : 8 (cached)
00:04:03.476 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:04:03.476 Library m found: YES
00:04:03.476 Library numa found: YES
00:04:03.476 Has header "numaif.h" : YES
00:04:03.476 Library fdt found: NO
00:04:03.476 Library execinfo found: NO
00:04:03.476 Has header "execinfo.h" : YES
00:04:03.476 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:03.476 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:03.476 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:03.476 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:03.476 Run-time dependency openssl found: YES 3.0.9
00:04:03.476 Run-time dependency libpcap found: YES 1.10.4
00:04:03.476 Has header "pcap.h" with dependency libpcap: YES
00:04:03.476 Compiler for C supports arguments -Wcast-qual: YES
00:04:03.476 Compiler for C supports arguments -Wdeprecated: YES
00:04:03.476 Compiler for C supports arguments -Wformat: YES
00:04:03.476 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:03.476 Compiler for C supports arguments -Wformat-security: NO
00:04:03.476 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:03.476 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:03.476 Compiler for C supports arguments -Wnested-externs: YES
00:04:03.476 Compiler for C supports arguments -Wold-style-definition: YES
00:04:03.476 Compiler for C supports arguments -Wpointer-arith: YES
00:04:03.476 Compiler for C supports arguments -Wsign-compare: YES
00:04:03.476 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:03.476 Compiler for C supports arguments -Wundef: YES
00:04:03.476 Compiler for C supports arguments -Wwrite-strings: YES
00:04:03.476 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:03.476 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:03.476 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:03.476 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:03.476 Program objdump found: YES (/usr/bin/objdump)
00:04:03.476 Compiler for C supports arguments -mavx512f: YES
00:04:03.476 Checking if "AVX512 checking" compiles: YES
00:04:03.476 Fetching value of define "__SSE4_2__" : 1
00:04:03.476 Fetching value of define "__AES__" : 1
00:04:03.476 Fetching value of define "__AVX__" : 1
00:04:03.476 Fetching value of define "__AVX2__" : 1
00:04:03.476 Fetching value of define "__AVX512BW__" : 1
00:04:03.476 Fetching value of define "__AVX512CD__" : 1
00:04:03.476 Fetching value of define "__AVX512DQ__" : 1
00:04:03.476 Fetching value of define "__AVX512F__" : 1
00:04:03.476 Fetching value of define "__AVX512VL__" : 1
00:04:03.476 Fetching value of define "__PCLMUL__" : 1
00:04:03.476 Fetching value of define "__RDRND__" : 1
00:04:03.476 Fetching value of define "__RDSEED__" : 1
00:04:03.476 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:03.476 Fetching value of define "__znver1__" : (undefined)
00:04:03.476 Fetching value of define "__znver2__" : (undefined)
00:04:03.476 Fetching value of define "__znver3__" : (undefined)
00:04:03.476 Fetching value of define "__znver4__" : (undefined)
00:04:03.476 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:03.476 Message: lib/log: Defining dependency "log"
00:04:03.476 Message: lib/kvargs: Defining dependency "kvargs"
00:04:03.476 Message: lib/telemetry: Defining dependency "telemetry"
00:04:03.476 Checking for function "getentropy" : NO
00:04:03.476 Message: lib/eal: Defining dependency "eal"
00:04:03.476 Message: lib/ring: Defining dependency "ring"
00:04:03.476 Message: lib/rcu: Defining dependency "rcu"
00:04:03.476 Message: lib/mempool: Defining dependency "mempool"
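The long run of "Compiler for C supports arguments …: YES/NO" lines above is meson probing the host toolchain one flag at a time; each probe boils down to compiling a trivial translation unit with the candidate flag and checking the exit status. A rough standalone equivalent (a sketch of the idea, not meson's actual implementation; `CC` falls back to `cc`):

```shell
#!/usr/bin/env bash
# Probe whether the host C compiler accepts a flag, meson-style.
supports_cflag() {
    # -Werror turns "unknown warning option" warnings (clang) into
    # failures; gcc rejects unrecognized positive -W flags outright.
    echo 'int main(void) { return 0; }' |
        "${CC:-cc}" -Werror "$1" -x c -c -o /dev/null - 2>/dev/null
}

for flag in -Wcast-qual -Wundef -Wwrite-strings; do
    if supports_cflag "$flag"; then
        echo "Compiler for C supports arguments $flag: YES"
    else
        echo "Compiler for C supports arguments $flag: NO"
    fi
done
```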
00:04:03.476 Message: lib/mbuf: Defining dependency "mbuf"
00:04:03.476 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:03.476 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:03.476 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:03.476 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:03.476 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:03.476 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:04:03.476 Compiler for C supports arguments -mpclmul: YES
00:04:03.476 Compiler for C supports arguments -maes: YES
00:04:03.476 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:03.476 Compiler for C supports arguments -mavx512bw: YES
00:04:03.476 Compiler for C supports arguments -mavx512dq: YES
00:04:03.476 Compiler for C supports arguments -mavx512vl: YES
00:04:03.476 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:03.476 Compiler for C supports arguments -mavx2: YES
00:04:03.476 Compiler for C supports arguments -mavx: YES
00:04:03.476 Message: lib/net: Defining dependency "net"
00:04:03.476 Message: lib/meter: Defining dependency "meter"
00:04:03.476 Message: lib/ethdev: Defining dependency "ethdev"
00:04:03.476 Message: lib/pci: Defining dependency "pci"
00:04:03.476 Message: lib/cmdline: Defining dependency "cmdline"
00:04:03.476 Message: lib/hash: Defining dependency "hash"
00:04:03.476 Message: lib/timer: Defining dependency "timer"
00:04:03.476 Message: lib/compressdev: Defining dependency "compressdev"
00:04:03.477 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:03.477 Message: lib/dmadev: Defining dependency "dmadev"
00:04:03.477 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:03.477 Message: lib/power: Defining dependency "power"
00:04:03.477 Message: lib/reorder: Defining dependency "reorder"
00:04:03.477 Message: lib/security: Defining dependency "security"
00:04:03.477 Has header "linux/userfaultfd.h" : YES
00:04:03.477 Has header "linux/vduse.h" : YES
00:04:03.477 Message: lib/vhost: Defining dependency "vhost"
00:04:03.477 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:03.477 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:04:03.477 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:03.477 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:03.477 Compiler for C supports arguments -std=c11: YES
00:04:03.477 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:04:03.477 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:04:03.477 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:04:03.477 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:04:03.477 Run-time dependency libmlx5 found: YES 1.24.44.0
00:04:03.477 Run-time dependency libibverbs found: YES 1.14.44.0
00:04:03.477 Library mtcr_ul found: NO
00:04:03.477 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:04:03.477 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO
00:04:06.768 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO
00:04:06.768 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES
00:04:06.768 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:04:06.769 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies 
libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:04:06.769 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:04:06.769 Configuring mlx5_autoconf.h using configuration 00:04:06.769 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:04:06.769 Run-time dependency libcrypto found: YES 3.0.9 00:04:06.769 Library IPSec_MB found: YES 00:04:06.769 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:04:06.769 Message: drivers/common/qat: Defining dependency "common_qat" 00:04:06.769 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:06.769 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:06.769 Library IPSec_MB found: YES 00:04:06.769 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:04:06.769 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:04:06.769 Compiler for C supports arguments 
-std=c11: YES (cached) 00:04:06.769 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:04:06.769 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:04:06.769 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:04:06.769 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:04:06.769 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:04:06.769 Run-time dependency libisal found: NO (tried pkgconfig) 00:04:06.769 Library libisal found: NO 00:04:06.769 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:04:06.769 Compiler for C supports arguments -std=c11: YES (cached) 00:04:06.769 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:04:06.769 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:04:06.769 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:04:06.769 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:04:06.769 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:04:06.769 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:06.769 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:06.769 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:06.769 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:06.769 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:06.769 Program doxygen found: YES (/usr/bin/doxygen) 00:04:06.769 Configuring doxy-api-html.conf using configuration 00:04:06.769 Configuring doxy-api-man.conf using configuration 00:04:06.769 Program mandb found: YES (/usr/bin/mandb) 00:04:06.769 Program sphinx-build found: NO 00:04:06.769 Configuring rte_build_config.h using configuration 00:04:06.769 Message: 00:04:06.769 ================= 00:04:06.769 Applications Enabled 00:04:06.769 ================= 00:04:06.769 
00:04:06.769 apps: 00:04:06.769 00:04:06.769 00:04:06.769 Message: 00:04:06.769 ================= 00:04:06.769 Libraries Enabled 00:04:06.769 ================= 00:04:06.769 00:04:06.769 libs: 00:04:06.769 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:06.769 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:06.769 cryptodev, dmadev, power, reorder, security, vhost, 00:04:06.769 00:04:06.769 Message: 00:04:06.769 =============== 00:04:06.769 Drivers Enabled 00:04:06.769 =============== 00:04:06.769 00:04:06.769 common: 00:04:06.769 mlx5, qat, 00:04:06.769 bus: 00:04:06.769 auxiliary, pci, vdev, 00:04:06.769 mempool: 00:04:06.769 ring, 00:04:06.769 dma: 00:04:06.769 00:04:06.769 net: 00:04:06.769 00:04:06.769 crypto: 00:04:06.769 ipsec_mb, mlx5, 00:04:06.769 compress: 00:04:06.769 isal, mlx5, 00:04:06.769 vdpa: 00:04:06.769 00:04:06.769 00:04:06.769 Message: 00:04:06.769 ================= 00:04:06.769 Content Skipped 00:04:06.769 ================= 00:04:06.769 00:04:06.769 apps: 00:04:06.769 dumpcap: explicitly disabled via build config 00:04:06.769 graph: explicitly disabled via build config 00:04:06.769 pdump: explicitly disabled via build config 00:04:06.769 proc-info: explicitly disabled via build config 00:04:06.769 test-acl: explicitly disabled via build config 00:04:06.769 test-bbdev: explicitly disabled via build config 00:04:06.769 test-cmdline: explicitly disabled via build config 00:04:06.769 test-compress-perf: explicitly disabled via build config 00:04:06.769 test-crypto-perf: explicitly disabled via build config 00:04:06.769 test-dma-perf: explicitly disabled via build config 00:04:06.769 test-eventdev: explicitly disabled via build config 00:04:06.769 test-fib: explicitly disabled via build config 00:04:06.769 test-flow-perf: explicitly disabled via build config 00:04:06.769 test-gpudev: explicitly disabled via build config 00:04:06.769 test-mldev: explicitly disabled via build config 00:04:06.769 test-pipeline: explicitly 
disabled via build config 00:04:06.769 test-pmd: explicitly disabled via build config 00:04:06.769 test-regex: explicitly disabled via build config 00:04:06.769 test-sad: explicitly disabled via build config 00:04:06.769 test-security-perf: explicitly disabled via build config 00:04:06.769 00:04:06.769 libs: 00:04:06.769 argparse: explicitly disabled via build config 00:04:06.769 metrics: explicitly disabled via build config 00:04:06.769 acl: explicitly disabled via build config 00:04:06.769 bbdev: explicitly disabled via build config 00:04:06.769 bitratestats: explicitly disabled via build config 00:04:06.769 bpf: explicitly disabled via build config 00:04:06.769 cfgfile: explicitly disabled via build config 00:04:06.769 distributor: explicitly disabled via build config 00:04:06.769 efd: explicitly disabled via build config 00:04:06.769 eventdev: explicitly disabled via build config 00:04:06.769 dispatcher: explicitly disabled via build config 00:04:06.769 gpudev: explicitly disabled via build config 00:04:06.769 gro: explicitly disabled via build config 00:04:06.769 gso: explicitly disabled via build config 00:04:06.769 ip_frag: explicitly disabled via build config 00:04:06.769 jobstats: explicitly disabled via build config 00:04:06.769 latencystats: explicitly disabled via build config 00:04:06.769 lpm: explicitly disabled via build config 00:04:06.769 member: explicitly disabled via build config 00:04:06.769 pcapng: explicitly disabled via build config 00:04:06.769 rawdev: explicitly disabled via build config 00:04:06.769 regexdev: explicitly disabled via build config 00:04:06.769 mldev: explicitly disabled via build config 00:04:06.769 rib: explicitly disabled via build config 00:04:06.769 sched: explicitly disabled via build config 00:04:06.769 stack: explicitly disabled via build config 00:04:06.769 ipsec: explicitly disabled via build config 00:04:06.769 pdcp: explicitly disabled via build config 00:04:06.769 fib: explicitly disabled via build config 
00:04:06.769 port: explicitly disabled via build config 00:04:06.769 pdump: explicitly disabled via build config 00:04:06.769 table: explicitly disabled via build config 00:04:06.769 pipeline: explicitly disabled via build config 00:04:06.769 graph: explicitly disabled via build config 00:04:06.769 node: explicitly disabled via build config 00:04:06.769 00:04:06.769 drivers: 00:04:06.769 common/cpt: not in enabled drivers build config 00:04:06.769 common/dpaax: not in enabled drivers build config 00:04:06.769 common/iavf: not in enabled drivers build config 00:04:06.769 common/idpf: not in enabled drivers build config 00:04:06.769 common/ionic: not in enabled drivers build config 00:04:06.769 common/mvep: not in enabled drivers build config 00:04:06.769 common/octeontx: not in enabled drivers build config 00:04:06.769 bus/cdx: not in enabled drivers build config 00:04:06.770 bus/dpaa: not in enabled drivers build config 00:04:06.770 bus/fslmc: not in enabled drivers build config 00:04:06.770 bus/ifpga: not in enabled drivers build config 00:04:06.770 bus/platform: not in enabled drivers build config 00:04:06.770 bus/uacce: not in enabled drivers build config 00:04:06.770 bus/vmbus: not in enabled drivers build config 00:04:06.770 common/cnxk: not in enabled drivers build config 00:04:06.770 common/nfp: not in enabled drivers build config 00:04:06.770 common/nitrox: not in enabled drivers build config 00:04:06.770 common/sfc_efx: not in enabled drivers build config 00:04:06.770 mempool/bucket: not in enabled drivers build config 00:04:06.770 mempool/cnxk: not in enabled drivers build config 00:04:06.770 mempool/dpaa: not in enabled drivers build config 00:04:06.770 mempool/dpaa2: not in enabled drivers build config 00:04:06.770 mempool/octeontx: not in enabled drivers build config 00:04:06.770 mempool/stack: not in enabled drivers build config 00:04:06.770 dma/cnxk: not in enabled drivers build config 00:04:06.770 dma/dpaa: not in enabled drivers build config 
00:04:06.770 dma/dpaa2: not in enabled drivers build config 00:04:06.770 dma/hisilicon: not in enabled drivers build config 00:04:06.770 dma/idxd: not in enabled drivers build config 00:04:06.770 dma/ioat: not in enabled drivers build config 00:04:06.770 dma/skeleton: not in enabled drivers build config 00:04:06.770 net/af_packet: not in enabled drivers build config 00:04:06.770 net/af_xdp: not in enabled drivers build config 00:04:06.770 net/ark: not in enabled drivers build config 00:04:06.770 net/atlantic: not in enabled drivers build config 00:04:06.770 net/avp: not in enabled drivers build config 00:04:06.770 net/axgbe: not in enabled drivers build config 00:04:06.770 net/bnx2x: not in enabled drivers build config 00:04:06.770 net/bnxt: not in enabled drivers build config 00:04:06.770 net/bonding: not in enabled drivers build config 00:04:06.770 net/cnxk: not in enabled drivers build config 00:04:06.770 net/cpfl: not in enabled drivers build config 00:04:06.770 net/cxgbe: not in enabled drivers build config 00:04:06.770 net/dpaa: not in enabled drivers build config 00:04:06.770 net/dpaa2: not in enabled drivers build config 00:04:06.770 net/e1000: not in enabled drivers build config 00:04:06.770 net/ena: not in enabled drivers build config 00:04:06.770 net/enetc: not in enabled drivers build config 00:04:06.770 net/enetfec: not in enabled drivers build config 00:04:06.770 net/enic: not in enabled drivers build config 00:04:06.770 net/failsafe: not in enabled drivers build config 00:04:06.770 net/fm10k: not in enabled drivers build config 00:04:06.770 net/gve: not in enabled drivers build config 00:04:06.770 net/hinic: not in enabled drivers build config 00:04:06.770 net/hns3: not in enabled drivers build config 00:04:06.770 net/i40e: not in enabled drivers build config 00:04:06.770 net/iavf: not in enabled drivers build config 00:04:06.770 net/ice: not in enabled drivers build config 00:04:06.770 net/idpf: not in enabled drivers build config 00:04:06.770 
net/igc: not in enabled drivers build config 00:04:06.770 net/ionic: not in enabled drivers build config 00:04:06.770 net/ipn3ke: not in enabled drivers build config 00:04:06.770 net/ixgbe: not in enabled drivers build config 00:04:06.770 net/mana: not in enabled drivers build config 00:04:06.770 net/memif: not in enabled drivers build config 00:04:06.770 net/mlx4: not in enabled drivers build config 00:04:06.770 net/mlx5: not in enabled drivers build config 00:04:06.770 net/mvneta: not in enabled drivers build config 00:04:06.770 net/mvpp2: not in enabled drivers build config 00:04:06.770 net/netvsc: not in enabled drivers build config 00:04:06.770 net/nfb: not in enabled drivers build config 00:04:06.770 net/nfp: not in enabled drivers build config 00:04:06.770 net/ngbe: not in enabled drivers build config 00:04:06.770 net/null: not in enabled drivers build config 00:04:06.770 net/octeontx: not in enabled drivers build config 00:04:06.770 net/octeon_ep: not in enabled drivers build config 00:04:06.770 net/pcap: not in enabled drivers build config 00:04:06.770 net/pfe: not in enabled drivers build config 00:04:06.770 net/qede: not in enabled drivers build config 00:04:06.770 net/ring: not in enabled drivers build config 00:04:06.770 net/sfc: not in enabled drivers build config 00:04:06.770 net/softnic: not in enabled drivers build config 00:04:06.770 net/tap: not in enabled drivers build config 00:04:06.770 net/thunderx: not in enabled drivers build config 00:04:06.770 net/txgbe: not in enabled drivers build config 00:04:06.770 net/vdev_netvsc: not in enabled drivers build config 00:04:06.770 net/vhost: not in enabled drivers build config 00:04:06.770 net/virtio: not in enabled drivers build config 00:04:06.770 net/vmxnet3: not in enabled drivers build config 00:04:06.770 raw/*: missing internal dependency, "rawdev" 00:04:06.770 crypto/armv8: not in enabled drivers build config 00:04:06.770 crypto/bcmfs: not in enabled drivers build config 00:04:06.770 
crypto/caam_jr: not in enabled drivers build config 00:04:06.770 crypto/ccp: not in enabled drivers build config 00:04:06.770 crypto/cnxk: not in enabled drivers build config 00:04:06.770 crypto/dpaa_sec: not in enabled drivers build config 00:04:06.770 crypto/dpaa2_sec: not in enabled drivers build config 00:04:06.770 crypto/mvsam: not in enabled drivers build config 00:04:06.770 crypto/nitrox: not in enabled drivers build config 00:04:06.770 crypto/null: not in enabled drivers build config 00:04:06.770 crypto/octeontx: not in enabled drivers build config 00:04:06.770 crypto/openssl: not in enabled drivers build config 00:04:06.770 crypto/scheduler: not in enabled drivers build config 00:04:06.770 crypto/uadk: not in enabled drivers build config 00:04:06.770 crypto/virtio: not in enabled drivers build config 00:04:06.770 compress/nitrox: not in enabled drivers build config 00:04:06.770 compress/octeontx: not in enabled drivers build config 00:04:06.770 compress/zlib: not in enabled drivers build config 00:04:06.770 regex/*: missing internal dependency, "regexdev" 00:04:06.770 ml/*: missing internal dependency, "mldev" 00:04:06.770 vdpa/ifc: not in enabled drivers build config 00:04:06.770 vdpa/mlx5: not in enabled drivers build config 00:04:06.770 vdpa/nfp: not in enabled drivers build config 00:04:06.770 vdpa/sfc: not in enabled drivers build config 00:04:06.770 event/*: missing internal dependency, "eventdev" 00:04:06.770 baseband/*: missing internal dependency, "bbdev" 00:04:06.770 gpu/*: missing internal dependency, "gpudev" 00:04:06.770 00:04:06.770 00:04:07.029 Build targets in project: 115 00:04:07.029 00:04:07.029 DPDK 24.03.0 00:04:07.029 00:04:07.029 User defined options 00:04:07.029 buildtype : debug 00:04:07.029 default_library : shared 00:04:07.029 libdir : lib 00:04:07.029 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:04:07.029 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 
-I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:04:07.029 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:04:07.029 cpu_instruction_set: native 00:04:07.029 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:04:07.029 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:04:07.029 enable_docs : false 00:04:07.029 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:04:07.029 enable_kmods : false 00:04:07.029 tests : false 00:04:07.029 00:04:07.029 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:07.614 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:04:07.614 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:07.614 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:07.614 [3/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:07.614 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:07.614 [5/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:07.614 [6/378] Linking static target lib/librte_kvargs.a 00:04:07.881 [7/378] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:07.881 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:07.881 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:07.881 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:07.881 [11/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:07.881 [12/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:07.881 [13/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:07.881 [14/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:07.881 [15/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:07.881 [16/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:07.881 [17/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:07.881 [18/378] Linking static target lib/librte_log.a 00:04:07.881 [19/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:07.881 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:07.881 [21/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:07.881 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:07.881 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:07.881 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:07.881 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:07.881 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:07.881 [27/378] Linking static target lib/librte_pci.a 00:04:07.881 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:08.145 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:08.145 [30/378] Compiling C 
object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:08.145 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:08.145 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:08.145 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:08.145 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:08.145 [35/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:08.145 [36/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:08.145 [37/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:08.412 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:08.412 [39/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:08.412 [40/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.412 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:08.412 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:08.412 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:08.412 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:08.412 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:08.412 [46/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:08.412 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:08.412 [48/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:08.412 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:08.412 [50/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:08.412 [51/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:08.412 [52/378] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:08.412 [53/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:08.412 [54/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:08.412 [55/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:08.412 [56/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:08.412 [57/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:08.412 [58/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:08.412 [59/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.412 [60/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:08.412 [61/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:08.412 [62/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:08.412 [63/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:08.412 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:08.412 [65/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:08.412 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:08.412 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:08.412 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:08.412 [69/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:08.412 [70/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:08.412 [71/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:08.412 [72/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:08.412 [73/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:08.412 [74/378] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:08.412 [75/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:08.412 [76/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:08.412 [77/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:08.412 [78/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:08.412 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:08.412 [80/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:08.412 [81/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:08.412 [82/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:08.412 [83/378] Linking static target lib/librte_meter.a 00:04:08.412 [84/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:08.412 [85/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:08.412 [86/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:08.412 [87/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:08.412 [88/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:08.412 [89/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:08.413 [90/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:08.413 [91/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:08.413 [92/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:08.413 [93/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:08.413 [94/378] Linking static target lib/librte_telemetry.a 00:04:08.413 [95/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:08.413 [96/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:08.413 [97/378] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:08.413 [98/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:08.413 [99/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:08.413 [100/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:08.413 [101/378] Linking static target lib/librte_ring.a 00:04:08.413 [102/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:08.413 [103/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:08.679 [104/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:08.679 [105/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:08.679 [106/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:08.679 [107/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:08.679 [108/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:08.679 [109/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:08.679 [110/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:08.679 [111/378] Linking static target lib/librte_cmdline.a 00:04:08.679 [112/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:08.679 [113/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:04:08.679 [114/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:08.679 [115/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:08.679 [116/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:08.679 [117/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:08.679 [118/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:08.680 [119/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:08.680 [120/378] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:08.680 [121/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:08.680 [122/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:08.680 [123/378] Linking static target lib/librte_timer.a 00:04:08.680 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:08.680 [125/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:08.680 [126/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:08.680 [127/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:08.680 [128/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:08.680 [129/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:08.680 [130/378] Linking static target lib/librte_rcu.a 00:04:08.680 [131/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:04:08.680 [132/378] Linking static target lib/librte_mempool.a 00:04:08.680 [133/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:08.680 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:08.680 [135/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:08.680 [136/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:08.680 [137/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:08.680 [138/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:08.680 [139/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:08.939 [140/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:08.939 [141/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:08.939 [142/378] Linking static target lib/librte_net.a 00:04:08.939 [143/378] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:08.939 [144/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:08.939 [145/378] Linking static target lib/librte_eal.a 00:04:08.939 [146/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:08.939 [147/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:08.939 [148/378] Linking static target lib/librte_dmadev.a 00:04:08.939 [149/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:08.939 [150/378] Linking static target lib/librte_compressdev.a 00:04:08.939 [151/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:08.939 [152/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:08.939 [153/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:08.939 [154/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:08.939 [155/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:08.939 [156/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.939 [157/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:08.939 [158/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:08.939 [159/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:08.939 [160/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:04:08.939 [161/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.939 [162/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:08.939 [163/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:08.939 [164/378] Linking target lib/librte_log.so.24.1 00:04:08.939 [165/378] Linking static target lib/librte_mbuf.a 00:04:09.198 [166/378] Compiling C object 
drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:04:09.198 [167/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.198 [168/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:09.198 [169/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:09.198 [170/378] Linking static target lib/librte_reorder.a 00:04:09.198 [171/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.198 [172/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:09.198 [173/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:09.198 [174/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.198 [175/378] Linking static target lib/librte_security.a 00:04:09.198 [176/378] Linking static target lib/librte_power.a 00:04:09.198 [177/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:04:09.198 [178/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:09.198 [179/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:09.198 [180/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:04:09.198 [181/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:09.198 [182/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.198 [183/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:09.198 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:04:09.198 [185/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.198 [186/378] Linking target lib/librte_kvargs.so.24.1 00:04:09.198 [187/378] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:09.457 [188/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:09.457 [189/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:09.457 [190/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:04:09.457 [191/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:09.457 [192/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:04:09.457 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:04:09.457 [194/378] Linking static target lib/librte_hash.a 00:04:09.457 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:04:09.457 [196/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:04:09.457 [197/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:09.457 [198/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:09.457 [199/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:09.457 [200/378] Linking target lib/librte_telemetry.so.24.1 00:04:09.457 [201/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:04:09.457 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:04:09.457 [203/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:04:09.457 [204/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:04:09.457 [205/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:04:09.457 [206/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:09.457 [207/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:04:09.457 [208/378] 
Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:04:09.457 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:04:09.457 [210/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:04:09.457 [211/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:09.457 [212/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:04:09.457 [213/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:04:09.458 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:04:09.458 [215/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:09.458 [216/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:04:09.458 [217/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:04:09.458 [218/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:09.458 [219/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:09.458 [220/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:09.458 [221/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:04:09.458 [222/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:09.458 [223/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:04:09.458 [224/378] Linking static target lib/librte_cryptodev.a 00:04:09.458 [225/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:04:09.458 [226/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:04:09.458 [227/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:04:09.458 [228/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:04:09.458 [229/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:04:09.458 [230/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:04:09.458 [231/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:04:09.458 [232/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:04:09.458 [233/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:04:09.458 [234/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:04:09.458 [235/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:09.458 [236/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:04:09.458 [237/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:04:09.458 [238/378] Linking static target drivers/librte_bus_auxiliary.a 00:04:09.458 [239/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.458 [240/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:09.458 [241/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:04:09.716 [242/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:09.716 [243/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:04:09.716 [244/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:09.716 [245/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 
00:04:09.716 [246/378] Linking static target drivers/librte_bus_vdev.a 00:04:09.716 [247/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:09.716 [248/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:04:09.716 [249/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:04:09.716 [250/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.716 [251/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:04:09.717 [252/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:04:09.717 [253/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:04:09.717 [254/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.717 [255/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:04:09.717 [256/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:04:09.717 [257/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:09.717 [258/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:09.717 [259/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:04:09.717 [260/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:04:09.717 [261/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:09.717 [262/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:09.717 [263/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:09.717 [264/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:04:09.717 [265/378] 
Linking static target drivers/librte_bus_pci.a 00:04:09.717 [266/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.717 [267/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:04:09.717 [268/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:04:09.717 [269/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:04:09.717 [270/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:04:09.717 [271/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:04:09.717 [272/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:04:09.717 [273/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:04:09.717 [274/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:04:09.717 [275/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:04:09.717 [276/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:04:09.717 [277/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:09.717 [278/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:04:09.717 [279/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:04:09.717 [280/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:04:09.975 [281/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:09.975 [282/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:09.975 [283/378] Linking static target drivers/librte_mempool_ring.a 00:04:09.975 [284/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture 
output) 00:04:09.975 [285/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.975 [286/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.975 [287/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:04:09.975 [288/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:04:09.975 [289/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.975 [290/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:04:09.975 [291/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:04:09.975 [292/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.975 [293/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:04:09.975 [294/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:04:09.975 [295/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:04:09.975 [296/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:04:09.975 [297/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:09.975 [298/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:04:09.975 [299/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:04:09.975 [300/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:04:09.975 [301/378] Linking static target drivers/librte_compress_mlx5.a 00:04:09.975 [302/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:04:09.975 [303/378] Linking static target 
drivers/librte_crypto_mlx5.a 00:04:09.975 [304/378] Linking static target drivers/librte_compress_isal.a 00:04:09.975 [305/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:04:10.234 [306/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:04:10.234 [307/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:04:10.234 [308/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:04:10.234 [309/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.234 [310/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:10.234 [311/378] Linking static target lib/librte_ethdev.a 00:04:10.234 [312/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:04:10.234 [313/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:04:10.234 [314/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:04:10.234 [315/378] Linking static target drivers/librte_common_mlx5.a 00:04:10.234 [316/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.493 [317/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.752 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:04:10.752 [319/378] Linking static target drivers/libtmp_rte_common_qat.a 00:04:11.010 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:04:11.010 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:04:11.010 [322/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:04:11.268 [323/378] Linking static target drivers/librte_common_qat.a 00:04:11.527 
[324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:11.785 [325/378] Linking static target lib/librte_vhost.a 00:04:11.785 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.320 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.222 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.412 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.789 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.789 [331/378] Linking target lib/librte_eal.so.24.1 00:04:22.048 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:22.048 [333/378] Linking target lib/librte_pci.so.24.1 00:04:22.048 [334/378] Linking target lib/librte_meter.so.24.1 00:04:22.048 [335/378] Linking target lib/librte_ring.so.24.1 00:04:22.048 [336/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:04:22.048 [337/378] Linking target lib/librte_dmadev.so.24.1 00:04:22.048 [338/378] Linking target lib/librte_timer.so.24.1 00:04:22.048 [339/378] Linking target drivers/librte_bus_vdev.so.24.1 00:04:22.048 [340/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:22.307 [341/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:04:22.307 [342/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:22.307 [343/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:04:22.307 [344/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:22.307 [345/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:22.307 [346/378] Generating 
symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:22.307 [347/378] Linking target drivers/librte_bus_pci.so.24.1 00:04:22.307 [348/378] Linking target lib/librte_mempool.so.24.1 00:04:22.307 [349/378] Linking target lib/librte_rcu.so.24.1 00:04:22.307 [350/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:22.307 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:22.307 [352/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:04:22.566 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:04:22.566 [354/378] Linking target lib/librte_mbuf.so.24.1 00:04:22.566 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:22.566 [356/378] Linking target lib/librte_reorder.so.24.1 00:04:22.566 [357/378] Linking target lib/librte_net.so.24.1 00:04:22.566 [358/378] Linking target lib/librte_compressdev.so.24.1 00:04:22.566 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:04:22.824 [360/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:22.824 [361/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:04:22.824 [362/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:22.824 [363/378] Linking target lib/librte_security.so.24.1 00:04:22.824 [364/378] Linking target lib/librte_hash.so.24.1 00:04:22.824 [365/378] Linking target lib/librte_cmdline.so.24.1 00:04:22.824 [366/378] Linking target drivers/librte_compress_isal.so.24.1 00:04:22.824 [367/378] Linking target lib/librte_ethdev.so.24.1 00:04:23.083 [368/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:23.083 [369/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:04:23.083 [370/378] Generating symbol file 
lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:23.083 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:04:23.083 [372/378] Linking target lib/librte_power.so.24.1 00:04:23.083 [373/378] Linking target lib/librte_vhost.so.24.1 00:04:23.343 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:04:23.343 [375/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:04:23.343 [376/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:04:23.343 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:04:23.343 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:04:23.343 INFO: autodetecting backend as ninja 00:04:23.343 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:04:24.778 CC lib/ut/ut.o 00:04:24.778 CC lib/log/log.o 00:04:24.778 CC lib/log/log_flags.o 00:04:24.778 CC lib/log/log_deprecated.o 00:04:24.778 CC lib/ut_mock/mock.o 00:04:24.778 LIB libspdk_ut_mock.a 00:04:24.778 LIB libspdk_log.a 00:04:24.778 LIB libspdk_ut.a 00:04:24.778 SO libspdk_ut_mock.so.6.0 00:04:24.778 SO libspdk_log.so.7.0 00:04:24.778 SO libspdk_ut.so.2.0 00:04:24.778 SYMLINK libspdk_ut_mock.so 00:04:24.778 SYMLINK libspdk_ut.so 00:04:24.778 SYMLINK libspdk_log.so 00:04:25.347 CC lib/ioat/ioat.o 00:04:25.347 CC lib/util/bit_array.o 00:04:25.347 CC lib/util/base64.o 00:04:25.347 CC lib/util/crc16.o 00:04:25.347 CC lib/util/cpuset.o 00:04:25.347 CC lib/util/crc32.o 00:04:25.347 CC lib/util/crc32c.o 00:04:25.347 CXX lib/trace_parser/trace.o 00:04:25.347 CC lib/util/crc32_ieee.o 00:04:25.347 CC lib/util/crc64.o 00:04:25.347 CC lib/util/dif.o 00:04:25.347 CC lib/util/fd.o 00:04:25.347 CC lib/util/file.o 00:04:25.347 CC lib/dma/dma.o 00:04:25.347 CC lib/util/hexlify.o 00:04:25.347 CC lib/util/iov.o 00:04:25.347 CC lib/util/math.o 00:04:25.347 CC lib/util/pipe.o 00:04:25.347 CC lib/util/string.o 
00:04:25.347 CC lib/util/strerror_tls.o 00:04:25.347 CC lib/util/uuid.o 00:04:25.347 CC lib/util/fd_group.o 00:04:25.347 CC lib/util/xor.o 00:04:25.347 CC lib/util/zipf.o 00:04:25.347 CC lib/vfio_user/host/vfio_user.o 00:04:25.347 CC lib/vfio_user/host/vfio_user_pci.o 00:04:25.347 LIB libspdk_dma.a 00:04:25.606 SO libspdk_dma.so.4.0 00:04:25.606 LIB libspdk_ioat.a 00:04:25.606 SO libspdk_ioat.so.7.0 00:04:25.606 SYMLINK libspdk_dma.so 00:04:25.606 SYMLINK libspdk_ioat.so 00:04:25.606 LIB libspdk_vfio_user.a 00:04:25.606 SO libspdk_vfio_user.so.5.0 00:04:25.864 LIB libspdk_util.a 00:04:25.864 SYMLINK libspdk_vfio_user.so 00:04:25.864 SO libspdk_util.so.9.1 00:04:25.864 LIB libspdk_trace_parser.a 00:04:25.864 SO libspdk_trace_parser.so.5.0 00:04:26.123 SYMLINK libspdk_util.so 00:04:26.123 SYMLINK libspdk_trace_parser.so 00:04:26.381 CC lib/vmd/vmd.o 00:04:26.381 CC lib/vmd/led.o 00:04:26.381 CC lib/env_dpdk/env.o 00:04:26.381 CC lib/env_dpdk/pci.o 00:04:26.381 CC lib/env_dpdk/memory.o 00:04:26.381 CC lib/reduce/reduce.o 00:04:26.381 CC lib/env_dpdk/init.o 00:04:26.381 CC lib/env_dpdk/pci_ioat.o 00:04:26.381 CC lib/env_dpdk/threads.o 00:04:26.381 CC lib/env_dpdk/pci_idxd.o 00:04:26.381 CC lib/env_dpdk/pci_virtio.o 00:04:26.381 CC lib/env_dpdk/pci_vmd.o 00:04:26.381 CC lib/env_dpdk/pci_dpdk.o 00:04:26.381 CC lib/env_dpdk/pci_event.o 00:04:26.381 CC lib/env_dpdk/sigbus_handler.o 00:04:26.381 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:26.381 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:26.381 CC lib/rdma/common.o 00:04:26.381 CC lib/rdma/rdma_verbs.o 00:04:26.381 CC lib/idxd/idxd.o 00:04:26.381 CC lib/idxd/idxd_user.o 00:04:26.381 CC lib/idxd/idxd_kernel.o 00:04:26.381 CC lib/json/json_parse.o 00:04:26.381 CC lib/conf/conf.o 00:04:26.381 CC lib/json/json_util.o 00:04:26.381 CC lib/json/json_write.o 00:04:26.639 LIB libspdk_conf.a 00:04:26.639 LIB libspdk_json.a 00:04:26.639 LIB libspdk_rdma.a 00:04:26.639 SO libspdk_conf.so.6.0 00:04:26.639 SO libspdk_json.so.6.0 00:04:26.639 SO 
libspdk_rdma.so.6.0 00:04:26.639 SYMLINK libspdk_conf.so 00:04:26.898 SYMLINK libspdk_json.so 00:04:26.898 SYMLINK libspdk_rdma.so 00:04:26.898 LIB libspdk_reduce.a 00:04:26.898 SO libspdk_reduce.so.6.0 00:04:26.898 LIB libspdk_idxd.a 00:04:26.898 SO libspdk_idxd.so.12.0 00:04:26.898 SYMLINK libspdk_reduce.so 00:04:26.898 LIB libspdk_vmd.a 00:04:27.156 SO libspdk_vmd.so.6.0 00:04:27.156 SYMLINK libspdk_idxd.so 00:04:27.156 SYMLINK libspdk_vmd.so 00:04:27.156 CC lib/jsonrpc/jsonrpc_server.o 00:04:27.156 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:27.156 CC lib/jsonrpc/jsonrpc_client.o 00:04:27.156 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:27.414 LIB libspdk_jsonrpc.a 00:04:27.414 SO libspdk_jsonrpc.so.6.0 00:04:27.670 SYMLINK libspdk_jsonrpc.so 00:04:27.670 LIB libspdk_env_dpdk.a 00:04:27.670 SO libspdk_env_dpdk.so.14.1 00:04:27.928 SYMLINK libspdk_env_dpdk.so 00:04:27.928 CC lib/rpc/rpc.o 00:04:28.185 LIB libspdk_rpc.a 00:04:28.185 SO libspdk_rpc.so.6.0 00:04:28.185 SYMLINK libspdk_rpc.so 00:04:28.752 CC lib/notify/notify.o 00:04:28.753 CC lib/notify/notify_rpc.o 00:04:28.753 CC lib/keyring/keyring.o 00:04:28.753 CC lib/keyring/keyring_rpc.o 00:04:28.753 CC lib/trace/trace.o 00:04:28.753 CC lib/trace/trace_rpc.o 00:04:28.753 CC lib/trace/trace_flags.o 00:04:28.753 LIB libspdk_notify.a 00:04:29.011 SO libspdk_notify.so.6.0 00:04:29.011 LIB libspdk_keyring.a 00:04:29.011 LIB libspdk_trace.a 00:04:29.011 SYMLINK libspdk_notify.so 00:04:29.011 SO libspdk_keyring.so.1.0 00:04:29.011 SO libspdk_trace.so.10.0 00:04:29.011 SYMLINK libspdk_keyring.so 00:04:29.011 SYMLINK libspdk_trace.so 00:04:29.579 CC lib/sock/sock.o 00:04:29.579 CC lib/sock/sock_rpc.o 00:04:29.579 CC lib/thread/thread.o 00:04:29.579 CC lib/thread/iobuf.o 00:04:29.837 LIB libspdk_sock.a 00:04:29.837 SO libspdk_sock.so.10.0 00:04:30.096 SYMLINK libspdk_sock.so 00:04:30.355 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:30.355 CC lib/nvme/nvme_ctrlr.o 00:04:30.355 CC lib/nvme/nvme_fabric.o 00:04:30.355 CC 
lib/nvme/nvme_ns_cmd.o 00:04:30.355 CC lib/nvme/nvme_ns.o 00:04:30.355 CC lib/nvme/nvme_pcie_common.o 00:04:30.355 CC lib/nvme/nvme_pcie.o 00:04:30.355 CC lib/nvme/nvme_qpair.o 00:04:30.355 CC lib/nvme/nvme.o 00:04:30.355 CC lib/nvme/nvme_quirks.o 00:04:30.355 CC lib/nvme/nvme_transport.o 00:04:30.355 CC lib/nvme/nvme_discovery.o 00:04:30.355 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:30.355 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:30.355 CC lib/nvme/nvme_tcp.o 00:04:30.355 CC lib/nvme/nvme_opal.o 00:04:30.355 CC lib/nvme/nvme_io_msg.o 00:04:30.355 CC lib/nvme/nvme_poll_group.o 00:04:30.355 CC lib/nvme/nvme_zns.o 00:04:30.355 CC lib/nvme/nvme_stubs.o 00:04:30.355 CC lib/nvme/nvme_auth.o 00:04:30.355 CC lib/nvme/nvme_cuse.o 00:04:30.355 CC lib/nvme/nvme_rdma.o 00:04:30.922 LIB libspdk_thread.a 00:04:30.922 SO libspdk_thread.so.10.1 00:04:30.922 SYMLINK libspdk_thread.so 00:04:31.488 CC lib/blob/blobstore.o 00:04:31.488 CC lib/blob/request.o 00:04:31.488 CC lib/blob/blob_bs_dev.o 00:04:31.488 CC lib/init/json_config.o 00:04:31.488 CC lib/blob/zeroes.o 00:04:31.488 CC lib/init/subsystem.o 00:04:31.488 CC lib/accel/accel_rpc.o 00:04:31.488 CC lib/init/subsystem_rpc.o 00:04:31.488 CC lib/accel/accel.o 00:04:31.488 CC lib/virtio/virtio.o 00:04:31.488 CC lib/init/rpc.o 00:04:31.488 CC lib/virtio/virtio_vhost_user.o 00:04:31.488 CC lib/accel/accel_sw.o 00:04:31.488 CC lib/virtio/virtio_vfio_user.o 00:04:31.489 CC lib/virtio/virtio_pci.o 00:04:31.489 LIB libspdk_init.a 00:04:31.747 SO libspdk_init.so.5.0 00:04:31.747 LIB libspdk_virtio.a 00:04:31.747 SYMLINK libspdk_init.so 00:04:31.747 SO libspdk_virtio.so.7.0 00:04:31.747 SYMLINK libspdk_virtio.so 00:04:32.006 CC lib/event/app.o 00:04:32.006 CC lib/event/reactor.o 00:04:32.006 CC lib/event/log_rpc.o 00:04:32.006 CC lib/event/app_rpc.o 00:04:32.006 CC lib/event/scheduler_static.o 00:04:32.266 LIB libspdk_accel.a 00:04:32.266 LIB libspdk_nvme.a 00:04:32.266 SO libspdk_accel.so.15.0 00:04:32.525 SYMLINK libspdk_accel.so 
00:04:32.525 SO libspdk_nvme.so.13.0 00:04:32.525 LIB libspdk_event.a 00:04:32.525 SO libspdk_event.so.13.1 00:04:32.784 SYMLINK libspdk_event.so 00:04:32.784 SYMLINK libspdk_nvme.so 00:04:32.784 CC lib/bdev/bdev.o 00:04:32.784 CC lib/bdev/bdev_rpc.o 00:04:32.784 CC lib/bdev/bdev_zone.o 00:04:32.784 CC lib/bdev/part.o 00:04:32.784 CC lib/bdev/scsi_nvme.o 00:04:34.161 LIB libspdk_blob.a 00:04:34.161 SO libspdk_blob.so.11.0 00:04:34.421 SYMLINK libspdk_blob.so 00:04:34.680 CC lib/blobfs/blobfs.o 00:04:34.680 CC lib/blobfs/tree.o 00:04:34.680 CC lib/lvol/lvol.o 00:04:35.248 LIB libspdk_bdev.a 00:04:35.248 SO libspdk_bdev.so.15.0 00:04:35.507 SYMLINK libspdk_bdev.so 00:04:35.507 LIB libspdk_blobfs.a 00:04:35.507 SO libspdk_blobfs.so.10.0 00:04:35.507 LIB libspdk_lvol.a 00:04:35.507 SYMLINK libspdk_blobfs.so 00:04:35.507 SO libspdk_lvol.so.10.0 00:04:35.765 SYMLINK libspdk_lvol.so 00:04:35.765 CC lib/scsi/dev.o 00:04:35.765 CC lib/nbd/nbd.o 00:04:35.765 CC lib/scsi/lun.o 00:04:35.765 CC lib/nbd/nbd_rpc.o 00:04:35.765 CC lib/scsi/port.o 00:04:35.765 CC lib/scsi/scsi.o 00:04:35.765 CC lib/scsi/scsi_rpc.o 00:04:35.765 CC lib/scsi/scsi_bdev.o 00:04:35.765 CC lib/scsi/scsi_pr.o 00:04:35.765 CC lib/nvmf/ctrlr.o 00:04:35.765 CC lib/scsi/task.o 00:04:35.765 CC lib/nvmf/ctrlr_discovery.o 00:04:35.765 CC lib/nvmf/ctrlr_bdev.o 00:04:35.765 CC lib/nvmf/subsystem.o 00:04:35.765 CC lib/nvmf/nvmf.o 00:04:35.765 CC lib/nvmf/nvmf_rpc.o 00:04:35.765 CC lib/nvmf/transport.o 00:04:35.765 CC lib/nvmf/stubs.o 00:04:35.765 CC lib/nvmf/tcp.o 00:04:35.765 CC lib/nvmf/mdns_server.o 00:04:35.765 CC lib/nvmf/rdma.o 00:04:35.765 CC lib/ftl/ftl_core.o 00:04:35.765 CC lib/ftl/ftl_init.o 00:04:35.765 CC lib/nvmf/auth.o 00:04:35.765 CC lib/ublk/ublk.o 00:04:35.765 CC lib/ftl/ftl_layout.o 00:04:35.765 CC lib/ublk/ublk_rpc.o 00:04:35.765 CC lib/ftl/ftl_debug.o 00:04:35.765 CC lib/ftl/ftl_io.o 00:04:35.765 CC lib/ftl/ftl_sb.o 00:04:35.765 CC lib/ftl/ftl_l2p.o 00:04:35.765 CC lib/ftl/ftl_l2p_flat.o 
00:04:35.765 CC lib/ftl/ftl_nv_cache.o 00:04:35.765 CC lib/ftl/ftl_band.o 00:04:35.765 CC lib/ftl/ftl_band_ops.o 00:04:35.765 CC lib/ftl/ftl_writer.o 00:04:35.765 CC lib/ftl/ftl_rq.o 00:04:35.765 CC lib/ftl/ftl_reloc.o 00:04:35.765 CC lib/ftl/ftl_l2p_cache.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:35.765 CC lib/ftl/ftl_p2l.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:35.765 CC lib/ftl/utils/ftl_conf.o 00:04:35.765 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:35.765 CC lib/ftl/utils/ftl_mempool.o 00:04:35.765 CC lib/ftl/utils/ftl_md.o 00:04:35.765 CC lib/ftl/utils/ftl_bitmap.o 00:04:35.765 CC lib/ftl/utils/ftl_property.o 00:04:35.765 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:35.765 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:35.765 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:35.765 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:35.765 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:35.765 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:35.765 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:35.765 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:35.765 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:35.765 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:35.765 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:35.765 CC lib/ftl/base/ftl_base_dev.o 00:04:35.765 CC lib/ftl/base/ftl_base_bdev.o 00:04:35.765 CC lib/ftl/ftl_trace.o 00:04:36.331 LIB libspdk_scsi.a 00:04:36.331 LIB libspdk_nbd.a 00:04:36.590 SO libspdk_scsi.so.9.0 00:04:36.590 SO libspdk_nbd.so.7.0 00:04:36.590 SYMLINK libspdk_nbd.so 00:04:36.590 LIB libspdk_ublk.a 00:04:36.590 SYMLINK 
libspdk_scsi.so 00:04:36.590 SO libspdk_ublk.so.3.0 00:04:36.590 SYMLINK libspdk_ublk.so 00:04:36.848 LIB libspdk_ftl.a 00:04:36.848 CC lib/vhost/vhost.o 00:04:36.848 CC lib/vhost/vhost_rpc.o 00:04:36.848 CC lib/iscsi/conn.o 00:04:36.848 CC lib/vhost/vhost_scsi.o 00:04:36.848 CC lib/iscsi/init_grp.o 00:04:36.848 CC lib/iscsi/iscsi.o 00:04:36.848 CC lib/vhost/vhost_blk.o 00:04:36.848 CC lib/iscsi/md5.o 00:04:36.848 CC lib/vhost/rte_vhost_user.o 00:04:36.848 CC lib/iscsi/param.o 00:04:36.848 CC lib/iscsi/portal_grp.o 00:04:36.848 CC lib/iscsi/tgt_node.o 00:04:36.848 CC lib/iscsi/iscsi_subsystem.o 00:04:36.848 CC lib/iscsi/iscsi_rpc.o 00:04:36.848 CC lib/iscsi/task.o 00:04:37.106 SO libspdk_ftl.so.9.0 00:04:37.365 SYMLINK libspdk_ftl.so 00:04:37.931 LIB libspdk_nvmf.a 00:04:37.931 SO libspdk_nvmf.so.19.0 00:04:37.931 LIB libspdk_vhost.a 00:04:37.931 SO libspdk_vhost.so.8.0 00:04:38.190 SYMLINK libspdk_nvmf.so 00:04:38.190 SYMLINK libspdk_vhost.so 00:04:38.190 LIB libspdk_iscsi.a 00:04:38.190 SO libspdk_iscsi.so.8.0 00:04:38.449 SYMLINK libspdk_iscsi.so 00:04:39.017 CC module/env_dpdk/env_dpdk_rpc.o 00:04:39.275 CC module/accel/ioat/accel_ioat.o 00:04:39.275 CC module/accel/ioat/accel_ioat_rpc.o 00:04:39.275 LIB libspdk_env_dpdk_rpc.a 00:04:39.275 CC module/blob/bdev/blob_bdev.o 00:04:39.275 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:39.275 CC module/keyring/file/keyring.o 00:04:39.275 CC module/keyring/file/keyring_rpc.o 00:04:39.275 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:04:39.275 CC module/keyring/linux/keyring.o 00:04:39.275 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:04:39.275 CC module/sock/posix/posix.o 00:04:39.275 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.275 CC module/accel/iaa/accel_iaa.o 00:04:39.275 CC module/keyring/linux/keyring_rpc.o 00:04:39.275 CC module/accel/dsa/accel_dsa.o 00:04:39.275 CC module/accel/dsa/accel_dsa_rpc.o 00:04:39.275 CC module/scheduler/gscheduler/gscheduler.o 00:04:39.275 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:39.275 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:04:39.275 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:04:39.275 CC module/accel/error/accel_error.o 00:04:39.275 CC module/accel/error/accel_error_rpc.o 00:04:39.275 SO libspdk_env_dpdk_rpc.so.6.0 00:04:39.275 SYMLINK libspdk_env_dpdk_rpc.so 00:04:39.534 LIB libspdk_keyring_file.a 00:04:39.534 LIB libspdk_scheduler_gscheduler.a 00:04:39.534 LIB libspdk_keyring_linux.a 00:04:39.534 LIB libspdk_scheduler_dpdk_governor.a 00:04:39.534 LIB libspdk_accel_ioat.a 00:04:39.534 LIB libspdk_accel_iaa.a 00:04:39.534 LIB libspdk_scheduler_dynamic.a 00:04:39.534 LIB libspdk_accel_error.a 00:04:39.534 SO libspdk_keyring_file.so.1.0 00:04:39.534 SO libspdk_scheduler_gscheduler.so.4.0 00:04:39.534 SO libspdk_accel_ioat.so.6.0 00:04:39.534 SO libspdk_keyring_linux.so.1.0 00:04:39.534 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:39.534 SO libspdk_accel_iaa.so.3.0 00:04:39.534 SO libspdk_scheduler_dynamic.so.4.0 00:04:39.534 SO libspdk_accel_error.so.2.0 00:04:39.534 LIB libspdk_blob_bdev.a 00:04:39.534 LIB libspdk_accel_dsa.a 00:04:39.534 SYMLINK libspdk_keyring_file.so 00:04:39.534 SYMLINK libspdk_scheduler_gscheduler.so 00:04:39.534 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:39.534 SO libspdk_blob_bdev.so.11.0 00:04:39.534 SYMLINK libspdk_accel_ioat.so 00:04:39.534 SYMLINK libspdk_keyring_linux.so 00:04:39.534 SYMLINK libspdk_scheduler_dynamic.so 00:04:39.534 SYMLINK libspdk_accel_iaa.so 00:04:39.534 SYMLINK libspdk_accel_error.so 00:04:39.534 SO libspdk_accel_dsa.so.5.0 00:04:39.534 SYMLINK libspdk_blob_bdev.so 00:04:39.534 SYMLINK libspdk_accel_dsa.so 00:04:40.101 LIB libspdk_sock_posix.a 00:04:40.101 LIB libspdk_accel_dpdk_compressdev.a 00:04:40.101 SO libspdk_sock_posix.so.6.0 00:04:40.101 SO libspdk_accel_dpdk_compressdev.so.3.0 00:04:40.101 SYMLINK libspdk_sock_posix.so 00:04:40.101 CC module/blobfs/bdev/blobfs_bdev.o 00:04:40.101 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:40.101 CC module/bdev/gpt/gpt.o 00:04:40.101 CC module/bdev/gpt/vbdev_gpt.o 00:04:40.101 CC module/bdev/aio/bdev_aio.o 00:04:40.101 CC module/bdev/delay/vbdev_delay.o 00:04:40.101 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:40.101 CC module/bdev/error/vbdev_error_rpc.o 00:04:40.101 CC module/bdev/error/vbdev_error.o 00:04:40.101 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.101 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.101 CC module/bdev/nvme/bdev_nvme.o 00:04:40.101 CC module/bdev/compress/vbdev_compress_rpc.o 00:04:40.101 CC module/bdev/compress/vbdev_compress.o 00:04:40.101 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:40.101 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.101 CC module/bdev/nvme/nvme_rpc.o 00:04:40.101 CC module/bdev/crypto/vbdev_crypto.o 00:04:40.101 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.101 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:04:40.101 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.101 CC module/bdev/raid/bdev_raid.o 00:04:40.101 CC module/bdev/nvme/vbdev_opal.o 00:04:40.101 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.101 CC module/bdev/null/bdev_null.o 00:04:40.101 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.101 CC module/bdev/null/bdev_null_rpc.o 00:04:40.101 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.101 CC module/bdev/split/vbdev_split.o 00:04:40.101 CC module/bdev/raid/raid0.o 00:04:40.101 CC module/bdev/ftl/bdev_ftl.o 00:04:40.101 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:40.101 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:40.101 CC module/bdev/raid/raid1.o 00:04:40.101 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.101 CC module/bdev/passthru/vbdev_passthru.o 00:04:40.101 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:40.101 CC module/bdev/raid/concat.o 00:04:40.101 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:40.101 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.101 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.101 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:40.101 CC module/bdev/malloc/bdev_malloc.o 00:04:40.101 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.101 SYMLINK libspdk_accel_dpdk_compressdev.so 00:04:40.101 CC module/bdev/lvol/vbdev_lvol.o 00:04:40.101 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:40.359 LIB libspdk_blobfs_bdev.a 00:04:40.359 SO libspdk_blobfs_bdev.so.6.0 00:04:40.359 LIB libspdk_bdev_error.a 00:04:40.618 LIB libspdk_bdev_split.a 00:04:40.618 LIB libspdk_bdev_gpt.a 00:04:40.618 SYMLINK libspdk_blobfs_bdev.so 00:04:40.618 SO libspdk_bdev_error.so.6.0 00:04:40.618 SO libspdk_bdev_split.so.6.0 00:04:40.618 LIB libspdk_bdev_null.a 00:04:40.618 LIB libspdk_bdev_crypto.a 00:04:40.618 SO libspdk_bdev_gpt.so.6.0 00:04:40.618 LIB libspdk_accel_dpdk_cryptodev.a 00:04:40.618 LIB libspdk_bdev_delay.a 00:04:40.618 LIB libspdk_bdev_ftl.a 00:04:40.618 SO libspdk_bdev_null.so.6.0 00:04:40.618 LIB libspdk_bdev_zone_block.a 00:04:40.618 SO libspdk_bdev_crypto.so.6.0 00:04:40.618 LIB libspdk_bdev_aio.a 00:04:40.618 LIB libspdk_bdev_passthru.a 00:04:40.618 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:04:40.618 SYMLINK libspdk_bdev_error.so 00:04:40.618 SO libspdk_bdev_delay.so.6.0 00:04:40.618 SYMLINK libspdk_bdev_split.so 00:04:40.618 LIB libspdk_bdev_malloc.a 00:04:40.618 SO libspdk_bdev_zone_block.so.6.0 00:04:40.618 SO libspdk_bdev_ftl.so.6.0 00:04:40.618 SYMLINK libspdk_bdev_gpt.so 00:04:40.618 SO libspdk_bdev_aio.so.6.0 00:04:40.618 SO libspdk_bdev_passthru.so.6.0 00:04:40.618 LIB libspdk_bdev_compress.a 00:04:40.618 LIB libspdk_bdev_iscsi.a 00:04:40.618 SYMLINK libspdk_bdev_null.so 00:04:40.618 SYMLINK libspdk_bdev_crypto.so 00:04:40.618 SO libspdk_bdev_malloc.so.6.0 00:04:40.618 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:04:40.618 SO libspdk_bdev_compress.so.6.0 00:04:40.618 SYMLINK libspdk_bdev_delay.so 00:04:40.618 SYMLINK libspdk_bdev_passthru.so 00:04:40.618 SO libspdk_bdev_iscsi.so.6.0 00:04:40.618 SYMLINK libspdk_bdev_ftl.so 00:04:40.618 SYMLINK 
libspdk_bdev_zone_block.so 00:04:40.618 SYMLINK libspdk_bdev_aio.so 00:04:40.618 SYMLINK libspdk_bdev_malloc.so 00:04:40.878 SYMLINK libspdk_bdev_iscsi.so 00:04:40.878 SYMLINK libspdk_bdev_compress.so 00:04:40.878 LIB libspdk_bdev_lvol.a 00:04:40.878 LIB libspdk_bdev_virtio.a 00:04:40.878 SO libspdk_bdev_lvol.so.6.0 00:04:40.878 SO libspdk_bdev_virtio.so.6.0 00:04:40.878 SYMLINK libspdk_bdev_lvol.so 00:04:40.878 SYMLINK libspdk_bdev_virtio.so 00:04:41.137 LIB libspdk_bdev_raid.a 00:04:41.137 SO libspdk_bdev_raid.so.6.0 00:04:41.396 SYMLINK libspdk_bdev_raid.so 00:04:42.331 LIB libspdk_bdev_nvme.a 00:04:42.331 SO libspdk_bdev_nvme.so.7.0 00:04:42.590 SYMLINK libspdk_bdev_nvme.so 00:04:43.157 CC module/event/subsystems/sock/sock.o 00:04:43.157 CC module/event/subsystems/iobuf/iobuf.o 00:04:43.157 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:43.157 CC module/event/subsystems/keyring/keyring.o 00:04:43.157 CC module/event/subsystems/scheduler/scheduler.o 00:04:43.157 CC module/event/subsystems/vmd/vmd.o 00:04:43.157 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:43.157 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:43.416 LIB libspdk_event_sock.a 00:04:43.416 LIB libspdk_event_keyring.a 00:04:43.416 LIB libspdk_event_vmd.a 00:04:43.416 LIB libspdk_event_scheduler.a 00:04:43.416 LIB libspdk_event_vhost_blk.a 00:04:43.416 LIB libspdk_event_iobuf.a 00:04:43.416 SO libspdk_event_sock.so.5.0 00:04:43.416 SO libspdk_event_keyring.so.1.0 00:04:43.416 SO libspdk_event_scheduler.so.4.0 00:04:43.416 SO libspdk_event_vhost_blk.so.3.0 00:04:43.416 SO libspdk_event_iobuf.so.3.0 00:04:43.416 SO libspdk_event_vmd.so.6.0 00:04:43.416 SYMLINK libspdk_event_sock.so 00:04:43.416 SYMLINK libspdk_event_keyring.so 00:04:43.416 SYMLINK libspdk_event_vhost_blk.so 00:04:43.416 SYMLINK libspdk_event_scheduler.so 00:04:43.416 SYMLINK libspdk_event_iobuf.so 00:04:43.416 SYMLINK libspdk_event_vmd.so 00:04:44.025 CC module/event/subsystems/accel/accel.o 00:04:44.025 LIB 
libspdk_event_accel.a 00:04:44.025 SO libspdk_event_accel.so.6.0 00:04:44.336 SYMLINK libspdk_event_accel.so 00:04:44.596 CC module/event/subsystems/bdev/bdev.o 00:04:44.596 LIB libspdk_event_bdev.a 00:04:44.855 SO libspdk_event_bdev.so.6.0 00:04:44.855 SYMLINK libspdk_event_bdev.so 00:04:45.115 CC module/event/subsystems/ublk/ublk.o 00:04:45.115 CC module/event/subsystems/scsi/scsi.o 00:04:45.115 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:45.115 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:45.115 CC module/event/subsystems/nbd/nbd.o 00:04:45.373 LIB libspdk_event_ublk.a 00:04:45.373 LIB libspdk_event_scsi.a 00:04:45.373 LIB libspdk_event_nbd.a 00:04:45.373 SO libspdk_event_ublk.so.3.0 00:04:45.373 SO libspdk_event_scsi.so.6.0 00:04:45.373 SO libspdk_event_nbd.so.6.0 00:04:45.373 SYMLINK libspdk_event_ublk.so 00:04:45.373 LIB libspdk_event_nvmf.a 00:04:45.373 SYMLINK libspdk_event_nbd.so 00:04:45.373 SYMLINK libspdk_event_scsi.so 00:04:45.632 SO libspdk_event_nvmf.so.6.0 00:04:45.632 SYMLINK libspdk_event_nvmf.so 00:04:45.891 CC module/event/subsystems/iscsi/iscsi.o 00:04:45.891 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:45.891 LIB libspdk_event_vhost_scsi.a 00:04:46.150 LIB libspdk_event_iscsi.a 00:04:46.150 SO libspdk_event_vhost_scsi.so.3.0 00:04:46.150 SO libspdk_event_iscsi.so.6.0 00:04:46.150 SYMLINK libspdk_event_vhost_scsi.so 00:04:46.150 SYMLINK libspdk_event_iscsi.so 00:04:46.410 SO libspdk.so.6.0 00:04:46.410 SYMLINK libspdk.so 00:04:46.670 CC app/trace_record/trace_record.o 00:04:46.670 CC app/spdk_nvme_identify/identify.o 00:04:46.670 CXX app/trace/trace.o 00:04:46.670 CC app/spdk_lspci/spdk_lspci.o 00:04:46.670 CC app/spdk_top/spdk_top.o 00:04:46.670 TEST_HEADER include/spdk/accel_module.h 00:04:46.670 TEST_HEADER include/spdk/accel.h 00:04:46.670 TEST_HEADER include/spdk/assert.h 00:04:46.670 TEST_HEADER include/spdk/barrier.h 00:04:46.670 TEST_HEADER include/spdk/base64.h 00:04:46.670 CC test/rpc_client/rpc_client_test.o 
00:04:46.670 TEST_HEADER include/spdk/bdev.h 00:04:46.670 TEST_HEADER include/spdk/bdev_module.h 00:04:46.670 TEST_HEADER include/spdk/bdev_zone.h 00:04:46.670 TEST_HEADER include/spdk/bit_array.h 00:04:46.670 CC app/spdk_nvme_perf/perf.o 00:04:46.670 CC app/spdk_nvme_discover/discovery_aer.o 00:04:46.670 TEST_HEADER include/spdk/bit_pool.h 00:04:46.670 TEST_HEADER include/spdk/blob_bdev.h 00:04:46.670 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:46.670 TEST_HEADER include/spdk/blobfs.h 00:04:46.670 TEST_HEADER include/spdk/conf.h 00:04:46.670 TEST_HEADER include/spdk/blob.h 00:04:46.670 TEST_HEADER include/spdk/config.h 00:04:46.670 TEST_HEADER include/spdk/cpuset.h 00:04:46.670 TEST_HEADER include/spdk/crc32.h 00:04:46.670 TEST_HEADER include/spdk/crc16.h 00:04:46.670 TEST_HEADER include/spdk/crc64.h 00:04:46.670 TEST_HEADER include/spdk/dif.h 00:04:46.670 TEST_HEADER include/spdk/dma.h 00:04:46.670 TEST_HEADER include/spdk/endian.h 00:04:46.670 TEST_HEADER include/spdk/env.h 00:04:46.670 TEST_HEADER include/spdk/env_dpdk.h 00:04:46.670 TEST_HEADER include/spdk/event.h 00:04:46.670 TEST_HEADER include/spdk/fd_group.h 00:04:46.670 TEST_HEADER include/spdk/fd.h 00:04:46.670 TEST_HEADER include/spdk/file.h 00:04:46.670 TEST_HEADER include/spdk/ftl.h 00:04:46.670 TEST_HEADER include/spdk/hexlify.h 00:04:46.670 TEST_HEADER include/spdk/gpt_spec.h 00:04:46.670 TEST_HEADER include/spdk/histogram_data.h 00:04:46.670 TEST_HEADER include/spdk/idxd.h 00:04:46.670 TEST_HEADER include/spdk/init.h 00:04:46.670 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:46.670 TEST_HEADER include/spdk/idxd_spec.h 00:04:46.670 TEST_HEADER include/spdk/ioat_spec.h 00:04:46.670 TEST_HEADER include/spdk/ioat.h 00:04:46.670 TEST_HEADER include/spdk/iscsi_spec.h 00:04:46.670 TEST_HEADER include/spdk/json.h 00:04:46.670 TEST_HEADER include/spdk/keyring.h 00:04:46.670 TEST_HEADER include/spdk/jsonrpc.h 00:04:46.670 TEST_HEADER include/spdk/likely.h 00:04:46.670 TEST_HEADER 
include/spdk/keyring_module.h 00:04:46.670 TEST_HEADER include/spdk/log.h 00:04:46.670 TEST_HEADER include/spdk/memory.h 00:04:46.670 TEST_HEADER include/spdk/lvol.h 00:04:46.670 TEST_HEADER include/spdk/mmio.h 00:04:46.670 TEST_HEADER include/spdk/nbd.h 00:04:46.670 TEST_HEADER include/spdk/notify.h 00:04:46.671 TEST_HEADER include/spdk/nvme.h 00:04:46.671 TEST_HEADER include/spdk/nvme_intel.h 00:04:46.671 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:46.671 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:46.671 TEST_HEADER include/spdk/nvme_spec.h 00:04:46.671 TEST_HEADER include/spdk/nvme_zns.h 00:04:46.671 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:46.671 CC app/nvmf_tgt/nvmf_main.o 00:04:46.671 TEST_HEADER include/spdk/nvmf.h 00:04:46.671 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:46.671 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.671 TEST_HEADER include/spdk/nvmf_transport.h 00:04:46.671 TEST_HEADER include/spdk/opal.h 00:04:46.671 TEST_HEADER include/spdk/opal_spec.h 00:04:46.671 CC app/spdk_dd/spdk_dd.o 00:04:46.671 TEST_HEADER include/spdk/pci_ids.h 00:04:46.934 TEST_HEADER include/spdk/pipe.h 00:04:46.934 TEST_HEADER include/spdk/queue.h 00:04:46.934 TEST_HEADER include/spdk/reduce.h 00:04:46.934 TEST_HEADER include/spdk/rpc.h 00:04:46.934 TEST_HEADER include/spdk/scheduler.h 00:04:46.934 CC app/iscsi_tgt/iscsi_tgt.o 00:04:46.934 TEST_HEADER include/spdk/scsi.h 00:04:46.934 TEST_HEADER include/spdk/scsi_spec.h 00:04:46.934 TEST_HEADER include/spdk/sock.h 00:04:46.934 TEST_HEADER include/spdk/stdinc.h 00:04:46.934 TEST_HEADER include/spdk/string.h 00:04:46.934 TEST_HEADER include/spdk/thread.h 00:04:46.934 TEST_HEADER include/spdk/trace.h 00:04:46.934 TEST_HEADER include/spdk/trace_parser.h 00:04:46.934 TEST_HEADER include/spdk/tree.h 00:04:46.934 TEST_HEADER include/spdk/ublk.h 00:04:46.934 TEST_HEADER include/spdk/util.h 00:04:46.934 TEST_HEADER include/spdk/uuid.h 00:04:46.934 TEST_HEADER include/spdk/version.h 00:04:46.934 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:04:46.934 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:46.934 TEST_HEADER include/spdk/vmd.h 00:04:46.934 TEST_HEADER include/spdk/vhost.h 00:04:46.934 CC app/spdk_tgt/spdk_tgt.o 00:04:46.934 TEST_HEADER include/spdk/xor.h 00:04:46.934 TEST_HEADER include/spdk/zipf.h 00:04:46.934 CXX test/cpp_headers/accel.o 00:04:46.934 CXX test/cpp_headers/accel_module.o 00:04:46.934 CXX test/cpp_headers/assert.o 00:04:46.934 CXX test/cpp_headers/barrier.o 00:04:46.934 CXX test/cpp_headers/bdev.o 00:04:46.934 CXX test/cpp_headers/base64.o 00:04:46.934 CXX test/cpp_headers/bdev_module.o 00:04:46.934 CXX test/cpp_headers/bit_array.o 00:04:46.934 CXX test/cpp_headers/bdev_zone.o 00:04:46.934 CXX test/cpp_headers/bit_pool.o 00:04:46.934 CXX test/cpp_headers/blob_bdev.o 00:04:46.934 CXX test/cpp_headers/blobfs_bdev.o 00:04:46.934 CC app/vhost/vhost.o 00:04:46.934 CXX test/cpp_headers/blobfs.o 00:04:46.934 CXX test/cpp_headers/blob.o 00:04:46.934 CXX test/cpp_headers/conf.o 00:04:46.934 CXX test/cpp_headers/config.o 00:04:46.934 CXX test/cpp_headers/cpuset.o 00:04:46.934 CXX test/cpp_headers/crc16.o 00:04:46.935 CXX test/cpp_headers/crc32.o 00:04:46.935 CXX test/cpp_headers/crc64.o 00:04:46.935 CXX test/cpp_headers/dif.o 00:04:46.935 CXX test/cpp_headers/dma.o 00:04:46.935 CXX test/cpp_headers/endian.o 00:04:46.935 CXX test/cpp_headers/env_dpdk.o 00:04:46.935 CXX test/cpp_headers/env.o 00:04:46.935 CXX test/cpp_headers/event.o 00:04:46.935 CXX test/cpp_headers/fd_group.o 00:04:46.935 CXX test/cpp_headers/fd.o 00:04:46.935 CXX test/cpp_headers/file.o 00:04:46.935 CXX test/cpp_headers/ftl.o 00:04:46.935 CXX test/cpp_headers/gpt_spec.o 00:04:46.935 CXX test/cpp_headers/hexlify.o 00:04:46.935 CXX test/cpp_headers/histogram_data.o 00:04:46.935 CXX test/cpp_headers/idxd.o 00:04:46.935 CXX test/cpp_headers/idxd_spec.o 00:04:46.935 CXX test/cpp_headers/init.o 00:04:46.935 CXX test/cpp_headers/ioat.o 00:04:46.935 CXX test/cpp_headers/ioat_spec.o 00:04:46.935 
CC examples/idxd/perf/perf.o 00:04:46.935 CC examples/ioat/verify/verify.o 00:04:46.935 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:46.935 CC examples/nvme/hello_world/hello_world.o 00:04:46.935 CC examples/util/zipf/zipf.o 00:04:46.935 CC examples/sock/hello_world/hello_sock.o 00:04:46.935 CC examples/ioat/perf/perf.o 00:04:46.935 CC examples/nvme/reconnect/reconnect.o 00:04:46.935 CC examples/nvme/arbitration/arbitration.o 00:04:46.935 CC examples/vmd/lsvmd/lsvmd.o 00:04:46.935 CC examples/nvme/abort/abort.o 00:04:46.935 CC examples/nvme/hotplug/hotplug.o 00:04:46.935 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.935 CC examples/vmd/led/led.o 00:04:46.935 CC test/nvme/sgl/sgl.o 00:04:46.935 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.935 CC test/nvme/aer/aer.o 00:04:46.935 CC test/event/reactor/reactor.o 00:04:46.935 CC test/event/event_perf/event_perf.o 00:04:46.935 CC test/nvme/reset/reset.o 00:04:46.935 CC test/event/reactor_perf/reactor_perf.o 00:04:46.935 CC test/nvme/boot_partition/boot_partition.o 00:04:46.935 CC examples/accel/perf/accel_perf.o 00:04:46.935 CC test/nvme/reserve/reserve.o 00:04:46.935 CC test/nvme/err_injection/err_injection.o 00:04:46.935 CC test/nvme/overhead/overhead.o 00:04:46.935 CC test/env/memory/memory_ut.o 00:04:46.935 CC app/fio/nvme/fio_plugin.o 00:04:46.935 CC test/app/jsoncat/jsoncat.o 00:04:46.935 CC test/nvme/connect_stress/connect_stress.o 00:04:46.935 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.935 CC test/nvme/compliance/nvme_compliance.o 00:04:46.935 CC test/thread/poller_perf/poller_perf.o 00:04:46.935 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.935 CC examples/blob/cli/blobcli.o 00:04:47.205 CC test/app/histogram_perf/histogram_perf.o 00:04:47.205 CC test/nvme/e2edp/nvme_dp.o 00:04:47.205 CC test/nvme/simple_copy/simple_copy.o 00:04:47.205 CC examples/blob/hello_world/hello_blob.o 00:04:47.205 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:47.205 CC 
test/nvme/startup/startup.o 00:04:47.205 CC test/app/stub/stub.o 00:04:47.205 CC test/nvme/fdp/fdp.o 00:04:47.205 CC test/nvme/cuse/cuse.o 00:04:47.205 CC test/env/vtophys/vtophys.o 00:04:47.205 CC examples/bdev/hello_world/hello_bdev.o 00:04:47.205 CC test/env/pci/pci_ut.o 00:04:47.205 CC examples/nvmf/nvmf/nvmf.o 00:04:47.205 CC examples/thread/thread/thread_ex.o 00:04:47.205 CC test/event/app_repeat/app_repeat.o 00:04:47.205 CC test/bdev/bdevio/bdevio.o 00:04:47.205 CC examples/bdev/bdevperf/bdevperf.o 00:04:47.205 CC test/blobfs/mkfs/mkfs.o 00:04:47.205 CC test/event/scheduler/scheduler.o 00:04:47.205 CC test/accel/dif/dif.o 00:04:47.205 CC test/dma/test_dma/test_dma.o 00:04:47.205 CC test/app/bdev_svc/bdev_svc.o 00:04:47.205 CC app/fio/bdev/fio_plugin.o 00:04:47.205 LINK spdk_lspci 00:04:47.473 LINK rpc_client_test 00:04:47.473 LINK interrupt_tgt 00:04:47.473 CC test/env/mem_callbacks/mem_callbacks.o 00:04:47.473 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:47.473 CC test/lvol/esnap/esnap.o 00:04:47.473 LINK nvmf_tgt 00:04:47.473 LINK lsvmd 00:04:47.473 LINK spdk_nvme_discover 00:04:47.737 LINK vhost 00:04:47.737 LINK iscsi_tgt 00:04:47.737 LINK pmr_persistence 00:04:47.737 LINK spdk_trace_record 00:04:47.737 LINK spdk_tgt 00:04:47.737 LINK boot_partition 00:04:47.737 CXX test/cpp_headers/json.o 00:04:47.737 CXX test/cpp_headers/iscsi_spec.o 00:04:47.737 LINK reactor 00:04:47.737 CXX test/cpp_headers/jsonrpc.o 00:04:47.737 LINK reactor_perf 00:04:47.737 LINK led 00:04:47.737 LINK jsoncat 00:04:47.737 CXX test/cpp_headers/keyring.o 00:04:47.737 LINK stub 00:04:47.737 LINK event_perf 00:04:47.737 LINK ioat_perf 00:04:47.737 CXX test/cpp_headers/keyring_module.o 00:04:47.737 CXX test/cpp_headers/likely.o 00:04:47.737 CXX test/cpp_headers/log.o 00:04:47.737 LINK histogram_perf 00:04:47.737 CXX test/cpp_headers/lvol.o 00:04:47.737 LINK zipf 00:04:47.737 CXX test/cpp_headers/memory.o 00:04:47.737 LINK poller_perf 00:04:47.737 CXX test/cpp_headers/mmio.o 00:04:47.737 
CXX test/cpp_headers/nbd.o 00:04:47.737 LINK hello_world 00:04:47.737 LINK err_injection 00:04:47.737 LINK vtophys 00:04:47.737 CXX test/cpp_headers/notify.o 00:04:47.737 LINK sgl 00:04:47.737 CXX test/cpp_headers/nvme.o 00:04:47.737 CXX test/cpp_headers/nvme_intel.o 00:04:47.737 LINK cmb_copy 00:04:47.737 LINK connect_stress 00:04:47.737 CXX test/cpp_headers/nvme_ocssd.o 00:04:47.737 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:47.737 CXX test/cpp_headers/nvme_spec.o 00:04:47.737 CXX test/cpp_headers/nvme_zns.o 00:04:47.737 LINK startup 00:04:47.737 CXX test/cpp_headers/nvmf_cmd.o 00:04:47.737 LINK app_repeat 00:04:47.737 CXX test/cpp_headers/nvmf.o 00:04:47.737 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:47.737 CXX test/cpp_headers/nvmf_spec.o 00:04:47.737 LINK verify 00:04:47.737 LINK mkfs 00:04:47.737 CXX test/cpp_headers/nvmf_transport.o 00:04:47.737 CXX test/cpp_headers/opal.o 00:04:47.737 CXX test/cpp_headers/opal_spec.o 00:04:47.737 LINK doorbell_aers 00:04:47.737 CXX test/cpp_headers/pci_ids.o 00:04:47.737 CXX test/cpp_headers/pipe.o 00:04:47.737 CXX test/cpp_headers/queue.o 00:04:47.737 CXX test/cpp_headers/reduce.o 00:04:47.737 LINK fused_ordering 00:04:47.737 CXX test/cpp_headers/rpc.o 00:04:47.737 LINK env_dpdk_post_init 00:04:47.737 CXX test/cpp_headers/scheduler.o 00:04:47.737 CXX test/cpp_headers/scsi.o 00:04:47.737 CXX test/cpp_headers/scsi_spec.o 00:04:47.737 CXX test/cpp_headers/sock.o 00:04:47.737 LINK simple_copy 00:04:47.737 LINK reserve 00:04:47.737 CXX test/cpp_headers/stdinc.o 00:04:47.737 CXX test/cpp_headers/string.o 00:04:47.737 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:47.737 LINK hotplug 00:04:47.996 LINK bdev_svc 00:04:47.996 CXX test/cpp_headers/thread.o 00:04:47.996 LINK thread 00:04:47.996 LINK aer 00:04:47.996 LINK hello_blob 00:04:47.996 LINK nvme_dp 00:04:47.996 LINK hello_sock 00:04:47.996 LINK reconnect 00:04:47.996 LINK scheduler 00:04:47.996 CXX test/cpp_headers/trace.o 00:04:47.996 LINK hello_bdev 00:04:47.996 CXX 
test/cpp_headers/trace_parser.o 00:04:47.996 LINK abort 00:04:47.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:47.996 LINK arbitration 00:04:47.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:47.996 LINK reset 00:04:47.996 LINK fdp 00:04:47.996 LINK nvmf 00:04:47.996 LINK spdk_dd 00:04:47.996 LINK spdk_trace 00:04:47.996 LINK overhead 00:04:47.996 LINK idxd_perf 00:04:47.996 CXX test/cpp_headers/ublk.o 00:04:47.996 CXX test/cpp_headers/tree.o 00:04:47.996 LINK nvme_compliance 00:04:47.996 CXX test/cpp_headers/util.o 00:04:47.996 CXX test/cpp_headers/uuid.o 00:04:47.996 CXX test/cpp_headers/version.o 00:04:47.996 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.254 LINK pci_ut 00:04:48.254 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.254 CXX test/cpp_headers/vhost.o 00:04:48.254 CXX test/cpp_headers/vmd.o 00:04:48.254 LINK test_dma 00:04:48.254 CXX test/cpp_headers/xor.o 00:04:48.254 CXX test/cpp_headers/zipf.o 00:04:48.254 LINK nvme_manage 00:04:48.254 LINK bdevio 00:04:48.254 LINK accel_perf 00:04:48.254 LINK dif 00:04:48.254 LINK blobcli 00:04:48.514 LINK spdk_nvme 00:04:48.514 LINK spdk_bdev 00:04:48.514 LINK nvme_fuzz 00:04:48.514 LINK spdk_nvme_perf 00:04:48.773 LINK spdk_top 00:04:48.773 LINK mem_callbacks 00:04:48.773 LINK vhost_fuzz 00:04:48.773 LINK spdk_nvme_identify 00:04:48.773 LINK bdevperf 00:04:49.031 LINK memory_ut 00:04:49.031 LINK cuse 00:04:49.599 LINK iscsi_fuzz 00:04:52.890 LINK esnap 00:04:53.149 00:04:53.149 real 1m23.345s 00:04:53.149 user 15m38.870s 00:04:53.149 sys 5m54.031s 00:04:53.149 18:49:07 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:04:53.149 18:49:07 make -- common/autotest_common.sh@10 -- $ set +x 00:04:53.149 ************************************ 00:04:53.149 END TEST make 00:04:53.149 ************************************ 00:04:53.149 18:49:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:53.149 18:49:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:53.149 18:49:07 -- pm/common@40 
-- $ local monitor pid pids signal=TERM 00:04:53.149 18:49:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.149 18:49:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:53.149 18:49:07 -- pm/common@44 -- $ pid=1433387 00:04:53.149 18:49:07 -- pm/common@50 -- $ kill -TERM 1433387 00:04:53.149 18:49:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.149 18:49:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:53.149 18:49:07 -- pm/common@44 -- $ pid=1433389 00:04:53.149 18:49:07 -- pm/common@50 -- $ kill -TERM 1433389 00:04:53.149 18:49:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.149 18:49:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:53.149 18:49:07 -- pm/common@44 -- $ pid=1433391 00:04:53.149 18:49:07 -- pm/common@50 -- $ kill -TERM 1433391 00:04:53.149 18:49:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.149 18:49:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:53.149 18:49:07 -- pm/common@44 -- $ pid=1433416 00:04:53.149 18:49:07 -- pm/common@50 -- $ sudo -E kill -TERM 1433416 00:04:53.408 18:49:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.408 18:49:07 -- nvmf/common.sh@7 -- # uname -s 00:04:53.408 18:49:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.408 18:49:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.408 18:49:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.408 18:49:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.408 18:49:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.408 18:49:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:04:53.408 18:49:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.408 18:49:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.408 18:49:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.408 18:49:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.408 18:49:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:53.408 18:49:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:53.408 18:49:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.408 18:49:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.408 18:49:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.408 18:49:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.408 18:49:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:04:53.408 18:49:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.408 18:49:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.408 18:49:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.408 18:49:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.408 18:49:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.408 18:49:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.408 18:49:07 -- paths/export.sh@5 -- # export PATH 00:04:53.408 18:49:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.408 18:49:07 -- nvmf/common.sh@47 -- # : 0 00:04:53.408 18:49:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.408 18:49:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.408 18:49:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.408 18:49:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.408 18:49:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.408 18:49:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.408 18:49:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.408 18:49:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.408 18:49:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:53.408 18:49:07 -- spdk/autotest.sh@32 -- # uname -s 00:04:53.408 18:49:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:53.408 18:49:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:53.408 18:49:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:04:53.408 18:49:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:53.408 18:49:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 
00:04:53.408 18:49:07 -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:53.408 18:49:07 -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:53.408 18:49:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:53.408 18:49:07 -- spdk/autotest.sh@48 -- # udevadm_pid=1503948
00:04:53.408 18:49:07 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:53.408 18:49:07 -- pm/common@17 -- # local monitor
00:04:53.408 18:49:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:53.408 18:49:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:53.408 18:49:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:53.408 18:49:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:53.408 18:49:07 -- pm/common@21 -- # date +%s
00:04:53.408 18:49:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:53.408 18:49:07 -- pm/common@21 -- # date +%s
00:04:53.408 18:49:07 -- pm/common@21 -- # date +%s
00:04:53.408 18:49:07 -- pm/common@25 -- # sleep 1
00:04:53.408 18:49:07 -- pm/common@21 -- # date +%s
00:04:53.408 18:49:07 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718038147
00:04:53.408 18:49:07 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718038147
00:04:53.408 18:49:07 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718038147
00:04:53.408 18:49:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718038147
00:04:53.408 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718038147_collect-cpu-temp.pm.log
00:04:53.408 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718038147_collect-vmstat.pm.log
00:04:53.408 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718038147_collect-cpu-load.pm.log
00:04:53.408 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718038147_collect-bmc-pm.bmc.pm.log
00:04:54.344 18:49:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:54.344 18:49:08 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:54.344 18:49:08 -- common/autotest_common.sh@723 -- # xtrace_disable
00:04:54.344 18:49:08 -- common/autotest_common.sh@10 -- # set +x
00:04:54.344 18:49:09 -- spdk/autotest.sh@59 -- # create_test_list
00:04:54.344 18:49:09 -- common/autotest_common.sh@747 -- # xtrace_disable
00:04:54.344 18:49:09 -- common/autotest_common.sh@10 -- # set +x
00:04:54.344 18:49:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh
00:04:54.344 18:49:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk
00:04:54.344 18:49:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:04:54.344 18:49:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:04:54.344 18:49:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:04:54.344 18:49:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:54.344 18:49:09 -- common/autotest_common.sh@1454 -- # uname
00:04:54.344 18:49:09 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']'
00:04:54.344 18:49:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:54.344 18:49:09 -- common/autotest_common.sh@1474 -- # uname
00:04:54.344 18:49:09 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]]
00:04:54.344 18:49:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:04:54.344 18:49:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:04:54.344 18:49:09 -- spdk/autotest.sh@72 -- # hash lcov
00:04:54.344 18:49:09 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:04:54.344 18:49:09 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:04:54.344 --rc lcov_branch_coverage=1
00:04:54.344 --rc lcov_function_coverage=1
00:04:54.344 --rc genhtml_branch_coverage=1
00:04:54.344 --rc genhtml_function_coverage=1
00:04:54.344 --rc genhtml_legend=1
00:04:54.344 --rc geninfo_all_blocks=1
00:04:54.344 '
00:04:54.344 18:49:09 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:04:54.344 --rc lcov_branch_coverage=1
00:04:54.344 --rc lcov_function_coverage=1
00:04:54.344 --rc genhtml_branch_coverage=1
00:04:54.344 --rc genhtml_function_coverage=1
00:04:54.344 --rc genhtml_legend=1
00:04:54.344 --rc geninfo_all_blocks=1
00:04:54.344 '
00:04:54.344 18:49:09 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:04:54.344 --rc lcov_branch_coverage=1
00:04:54.344 --rc lcov_function_coverage=1
00:04:54.344 --rc genhtml_branch_coverage=1
00:04:54.344 --rc genhtml_function_coverage=1
00:04:54.344 --rc genhtml_legend=1
00:04:54.344 --rc geninfo_all_blocks=1
00:04:54.344 --no-external'
00:04:54.344 18:49:09 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:04:54.344 --rc lcov_branch_coverage=1
00:04:54.344 --rc lcov_function_coverage=1
00:04:54.344 --rc genhtml_branch_coverage=1
00:04:54.345 --rc genhtml_function_coverage=1
00:04:54.345 --rc genhtml_legend=1
00:04:54.345 --rc geninfo_all_blocks=1
00:04:54.345 --no-external'
00:04:54.345 18:49:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:04:54.604 lcov: LCOV version 1.14
00:04:54.604 18:49:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info
00:05:09.484 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:09.484 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno
00:05:24.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:05:24.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:05:24.626 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:05:24.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:05:24.627 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:05:24.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno
00:05:24.627 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:05:24.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:05:24.627 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:05:24.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:05:24.627 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:05:24.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:05:24.627 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:05:24.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:05:24.627 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:05:24.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:05:24.887 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:05:24.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:05:25.147 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:05:25.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:05:27.051 18:49:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:05:27.051 18:49:41 -- common/autotest_common.sh@723 -- # xtrace_disable
00:05:27.051 18:49:41 -- common/autotest_common.sh@10 -- # set +x
00:05:27.051 18:49:41 -- spdk/autotest.sh@91 -- # rm -f
00:05:27.051 18:49:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:05:31.330 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:05:31.330 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:05:31.589 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:05:31.589 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:05:31.589 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:05:31.589 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:05:31.589 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:05:31.589 18:49:46 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:05:31.589 18:49:46 -- common/autotest_common.sh@1668 -- # zoned_devs=()
00:05:31.589 18:49:46 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs
00:05:31.589 18:49:46 -- common/autotest_common.sh@1669 -- # local nvme bdf
00:05:31.589 18:49:46 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:05:31.589 18:49:46 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1
00:05:31.589 18:49:46 -- common/autotest_common.sh@1661 -- # local device=nvme0n1
00:05:31.589 18:49:46 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:31.589 18:49:46 -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:05:31.589 18:49:46 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:05:31.589 18:49:46 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:05:31.589 18:49:46 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:05:31.589 18:49:46 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:05:31.589 18:49:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:05:31.589 18:49:46 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:31.589 No valid GPT data, bailing
00:05:31.589 18:49:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:31.589 18:49:46 -- scripts/common.sh@391 -- # pt=
00:05:31.589 18:49:46 -- scripts/common.sh@392 -- # return 1
00:05:31.589 18:49:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:31.589 1+0 records in
00:05:31.589 1+0 records out
00:05:31.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639367 s, 164 MB/s
00:05:31.589 18:49:46 -- spdk/autotest.sh@118 -- # sync
00:05:31.589 18:49:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:31.589 18:49:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:31.589 18:49:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:39.709 18:49:53 -- spdk/autotest.sh@124 -- # uname -s
00:05:39.709 18:49:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:05:39.709 18:49:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:05:39.709 18:49:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:39.709 18:49:53 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:39.709 18:49:53 -- common/autotest_common.sh@10 -- # set +x
00:05:39.709 ************************************
00:05:39.709 START TEST setup.sh
00:05:39.709 ************************************
00:05:39.709 18:49:53 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:05:39.709 * Looking for test storage...
00:05:39.709 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:05:39.709 18:49:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:05:39.709 18:49:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:05:39.709 18:49:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:05:39.709 18:49:53 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:39.709 18:49:53 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:39.709 18:49:53 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:39.709 ************************************
00:05:39.709 START TEST acl
00:05:39.709 ************************************
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:05:39.709 * Looking for test storage...
00:05:39.709 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:05:39.709 18:49:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=()
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme*
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:39.709 18:49:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]]
00:05:39.709 18:49:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:05:39.709 18:49:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:05:39.709 18:49:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:05:39.709 18:49:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:05:39.709 18:49:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:05:39.709 18:49:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:39.709 18:49:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:05:43.895 18:49:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:05:43.895 18:49:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:05:43.895 18:49:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:43.895 18:49:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:05:43.895 18:49:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:05:43.895 18:49:57 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:05:47.180 Hugepages
00:05:47.180 node hugesize free / total
00:05:47.180 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:47.180 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:47.180 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181
00:05:47.181 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.181 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:47.439 18:50:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:05:47.439 18:50:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
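The `collect_setup_devs` loop traced above splits each `setup.sh status` row with `read -r _ dev _ _ _ driver _`, skips rows whose second field is not a PCI `B:D.F` address (the hugepage summary lines), and collects only nvme-bound controllers. A minimal reconstruction of that pattern, for illustration only (field positions follow the `Type BDF Vendor Device NUMA Driver ...` header in the log; the `PCI_BLOCKED` filter mirrors the `acl.sh@21` check but is a sketch, not the SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the device-classification loop seen at acl.sh@18-22 above.
devs=()
declare -A drivers

collect_setup_devs() {
    local dev driver
    while read -r _ dev _ _ _ driver _; do
        # Hugepage summary rows have no BDF in the second column.
        [[ $dev == *:*:*.* ]] || continue
        # Only NVMe controllers are collected for the ACL tests.
        [[ $driver == nvme ]] || continue
        # Skip controllers listed in PCI_BLOCKED (empty by default).
        if [[ -n $PCI_BLOCKED && $PCI_BLOCKED == *"$dev"* ]]; then
            continue
        fi
        devs+=("$dev")
        drivers["$dev"]=nvme
    done
}
```

Fed the status output in the log, this leaves exactly one entry, `0000:d8:00.0`, which is why the trace then evaluates `(( 1 > 0 ))` and proceeds to the `denied` test.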
00:05:47.439 18:50:02 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:47.439 18:50:02 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:47.439 18:50:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:47.439 ************************************
00:05:47.439 START TEST denied
00:05:47.439 ************************************
00:05:47.439 18:50:02 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied
00:05:47.439 18:50:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0'
00:05:47.439 18:50:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:05:47.439 18:50:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0'
00:05:47.439 18:50:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:05:47.439 18:50:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:05:52.719 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]]
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:52.719 18:50:06 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:05:58.044
00:05:58.044 real 0m9.835s
00:05:58.044 user 0m3.117s
00:05:58.044 sys 0m6.022s
00:05:58.044 18:50:11 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:58.044 18:50:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:05:58.044 ************************************
00:05:58.044 END TEST denied
00:05:58.044 ************************************
00:05:58.044 18:50:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:05:58.044 18:50:11 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:58.044 18:50:11 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:58.044 18:50:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:58.044 ************************************
00:05:58.044 START TEST allowed
00:05:58.044 ************************************
00:05:58.044 18:50:12 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed
00:05:58.044 18:50:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0
00:05:58.044 18:50:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:05:58.044 18:50:12 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*'
00:05:58.044 18:50:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:05:58.044 18:50:12 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:06:03.307 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:06:03.307 18:50:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:06:03.307 18:50:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:06:03.307 18:50:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:06:03.307 18:50:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:03.307 18:50:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:06:08.582
00:06:08.582 real 0m10.318s
00:06:08.582 user 0m2.950s
00:06:08.582 sys 0m5.934s
00:06:08.582 18:50:22 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:08.582 18:50:22 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:06:08.582 ************************************
00:06:08.582 END TEST allowed
00:06:08.582 ************************************
00:06:08.582
00:06:08.582 real 0m29.185s
00:06:08.582 user 0m9.225s
00:06:08.582 sys 0m18.144s
00:06:08.582 18:50:22 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:08.582 18:50:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:06:08.582 ************************************
00:06:08.582 END TEST acl
00:06:08.582 ************************************
00:06:08.582 18:50:22 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:06:08.582 18:50:22 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:08.582 18:50:22 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:08.582 18:50:22 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:06:08.582 ************************************
00:06:08.582 START TEST hugepages
00:06:08.582 ************************************
00:06:08.582 18:50:22 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:06:08.582 * Looking for test storage...
00:06:08.582 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38862116 kB' 'MemAvailable: 40731180 kB' 'Buffers: 2724 kB' 'Cached: 12891552 kB' 'SwapCached: 308 kB' 'Active: 10572748 kB' 'Inactive: 2946468 kB' 'Active(anon): 10127924 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627848 kB' 'Mapped: 167080 kB' 'Shmem: 10597064 kB' 'KReclaimable: 499048 kB' 'Slab: 1163012 kB' 'SReclaimable: 499048 kB' 'SUnreclaim: 663964 kB' 'KernelStack: 22368 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 12797564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219016 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB'
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.582 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.583 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:08.584 18:50:22 setup.sh.hugepages -- 
setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:08.584 18:50:22 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:08.584 18:50:22 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:08.584 18:50:22 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.584 18:50:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:08.584 ************************************ 00:06:08.584 START TEST default_setup 00:06:08.584 ************************************ 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.584 18:50:22 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:11.924 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:11.924 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:11.925 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:12.183 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:12.183 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:12.183 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:12.183 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:13.558 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:13.558 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40977576 kB' 'MemAvailable: 42846608 kB' 'Buffers: 2724 kB' 'Cached: 12891692 kB' 'SwapCached: 308 kB' 'Active: 10588188 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143364 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643252 kB' 'Mapped: 167232 kB' 'Shmem: 10597204 kB' 'KReclaimable: 498984 kB' 'Slab: 1161968 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662984 kB' 'KernelStack: 22336 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12810912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.558 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.559 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.821 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.821 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.822 18:50:28 
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:13.822 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:13.823 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:13.823 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40980128 kB' 'MemAvailable: 42849160 kB' 'Buffers: 2724 kB' 'Cached: 12891696 kB' 'SwapCached: 308 kB' 'Active: 10588748 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143924 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643896 kB' 'Mapped: 167220 kB' 'Shmem: 10597208 kB' 'KReclaimable: 498984 kB' 'Slab: 1162056 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 663072 kB' 'KernelStack: 22496 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12810928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219032 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB'
00:06:13.823 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:13.823 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:13.823 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:13.823 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the setup/common.sh@31-32 loop repeats identically for every remaining field from MemFree through HugePages_Rsvd, each compared against HugePages_Surp and skipped via continue]
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
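The repeated `IFS=': '`; `read -r var val _`; `[[ <field> == <key> ]]` entries in this trace are a field-by-field scan of /proc/meminfo. A minimal sketch of the get_meminfo helper being traced, reconstructed from the xtrace output of setup/common.sh (inferred, not the verbatim SPDK source; the GET_MEMINFO_FILE override is this sketch's own testing hook):

```shell
#!/usr/bin/env bash
# Inferred sketch of setup/common.sh's get_meminfo, reconstructed from the
# xtrace above -- not the verbatim SPDK source. Prints the numeric value of
# one /proc/meminfo (or per-NUMA-node meminfo) field.
shopt -s extglob # needed for the "Node <N> " prefix-strip pattern below

get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=${GET_MEMINFO_FILE:-/proc/meminfo} # override hook for testing this sketch
	local -a mem

	# Condensed from the two tests at common.sh@23/@25: per-node lookups
	# read the node's own meminfo file when a node was given and it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix each line with "Node <N> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan field by field: split on ": ", skip until the key matches, then
	# print the value (any trailing "kB" unit lands in _ and is dropped).
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}
```

In this run it is called as `get_meminfo HugePages_Surp` with no node argument, so the per-node probe checks the nonexistent path /sys/devices/system/node/node/meminfo and falls through to /proc/meminfo, yielding `surp=0`.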
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:13.824 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:13.825 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40980708 kB' 'MemAvailable: 42849740 kB' 'Buffers: 2724 kB' 'Cached: 12891716 kB' 'SwapCached: 308 kB' 'Active: 10588440 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143616 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643468 kB' 'Mapped: 167220 kB' 'Shmem: 10597228 kB' 'KReclaimable: 498984 kB' 'Slab: 1162056 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 663072 kB' 'KernelStack: 22464 kB' 'PageTables: 9468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12810952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218984 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB'
00:06:13.825 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:13.825 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the setup/common.sh@31-32 loop repeats identically for each subsequent field from MemFree through CommitLimit, each compared against HugePages_Rsvd and skipped via continue; the log chunk is truncated mid-scan]
00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 
18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:13.826 nr_hugepages=1024 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:13.826 resv_hugepages=0 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:13.826 surplus_hugepages=0 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:13.826 anon_hugepages=0 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:13.826 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local 
get=HugePages_Total
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:13.827 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40982972 kB' 'MemAvailable: 42852004 kB' 'Buffers: 2724 kB' 'Cached: 12891736 kB' 'SwapCached: 308 kB' 'Active: 10588840 kB' 'Inactive: 2946468 kB' 'Active(anon): 10144016 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643836 kB' 'Mapped: 167220 kB' 'Shmem: 10597248 kB' 'KReclaimable: 498984 kB' 'Slab: 1161896 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662912 kB' 'KernelStack: 22576 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12810972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219032 kB' 'VmallocChunk: 0 kB' 'Percpu: 
101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB'
[00:06:13.827-00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: /proc/meminfo fields (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) each compared against HugePages_Total with IFS=': ' read -r var val _; no match, continue]
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:06:13.828 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21448864 kB' 'MemUsed: 11190276 kB' 'SwapCached: 296 kB' 'Active: 6410556 kB' 'Inactive: 986232 kB' 'Active(anon): 6117428 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969492 kB' 'Mapped: 113252 kB' 'AnonPages: 430428 kB' 'Shmem: 6489816 kB' 'KernelStack: 13960 kB' 'PageTables: 6736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176116 kB' 'Slab: 503872 kB' 'SReclaimable: 176116 kB' 
'SUnreclaim: 327756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.829 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.829 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:13.830 18:50:28 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:13.830 node0=1024 expecting 1024 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:13.830 00:06:13.830 real 0m5.755s 00:06:13.830 user 0m1.498s 00:06:13.830 sys 0m2.821s 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.830 18:50:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:13.830 ************************************ 00:06:13.830 END TEST default_setup 00:06:13.830 ************************************ 00:06:13.830 18:50:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:13.830 18:50:28 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:13.830 18:50:28 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.830 18:50:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:13.830 ************************************ 00:06:13.830 START TEST per_node_1G_alloc 00:06:13.830 ************************************ 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:06:13.830 18:50:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:06:13.830 18:50:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.830 18:50:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:18.029 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:18.029 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40987264 kB' 'MemAvailable: 
42856296 kB' 'Buffers: 2724 kB' 'Cached: 12891852 kB' 'SwapCached: 308 kB' 'Active: 10587812 kB' 'Inactive: 2946468 kB' 'Active(anon): 10142988 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642616 kB' 'Mapped: 166388 kB' 'Shmem: 10597364 kB' 'KReclaimable: 498984 kB' 'Slab: 1161556 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662572 kB' 'KernelStack: 22320 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12801720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.029 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.029 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical "IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" trace repeated for every non-matching /proc/meminfo key, MemAvailable through HardwareCorrupted ...] 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.031 18:50:32
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40988424 kB' 'MemAvailable: 42857456 kB' 'Buffers: 2724 kB' 'Cached: 12891856 kB' 'SwapCached: 308 kB' 'Active: 10587884 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143060 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642760 kB' 'Mapped: 166344 kB' 'Shmem: 10597368 kB' 'KReclaimable: 498984 kB' 'Slab: 1161544 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662560 kB' 'KernelStack: 22304 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12801736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.031 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" trace repeated for every non-matching /proc/meminfo key, MemAvailable through HugePages_Rsvd ...] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- #
get_meminfo HugePages_Rsvd 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40989056 kB' 'MemAvailable: 42858088 kB' 'Buffers: 2724 kB' 'Cached: 12891876 kB' 'SwapCached: 308 kB' 'Active: 10587912 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143088 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642764 kB' 'Mapped: 166344 kB' 'Shmem: 10597388 kB' 'KReclaimable: 498984 kB' 'Slab: 1161544 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662560 kB' 'KernelStack: 22304 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12801760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.033 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 
18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.034 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.035 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:18.035 nr_hugepages=1024 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:18.035 resv_hugepages=0 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:18.035 surplus_hugepages=0 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:18.035 anon_hugepages=0 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.035 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.035 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40989308 kB' 'MemAvailable: 42858340 kB' 'Buffers: 2724 kB' 'Cached: 12891896 kB' 'SwapCached: 308 kB' 'Active: 10587956 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143132 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642760 kB' 'Mapped: 166344 kB' 'Shmem: 10597408 kB' 'KReclaimable: 498984 kB' 'Slab: 1161544 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662560 kB' 'KernelStack: 22304 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12801784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.036 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.037 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22498944 kB' 'MemUsed: 10140196 kB' 
'SwapCached: 296 kB' 'Active: 6411496 kB' 'Inactive: 986232 kB' 'Active(anon): 6118368 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969548 kB' 'Mapped: 112404 kB' 'AnonPages: 431336 kB' 'Shmem: 6489872 kB' 'KernelStack: 13720 kB' 'PageTables: 5836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176116 kB' 'Slab: 503604 kB' 'SReclaimable: 176116 kB' 'SUnreclaim: 327488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:18.038 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.038 18:50:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:18.038-00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [IFS=': '/read/continue scan over remaining node0 meminfo fields (Shmem through HugePages_Free), none matching HugePages_Surp] 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:18.039 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18491156 kB' 'MemUsed: 9164900 kB' 'SwapCached: 12 kB' 'Active: 4176124 kB' 'Inactive: 1960236 kB' 'Active(anon): 4024428 kB' 'Inactive(anon): 294100 kB' 'Active(file): 151696 kB' 'Inactive(file): 1666136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5925440 kB' 'Mapped: 53940 kB' 'AnonPages: 211020 kB' 'Shmem: 4107596 kB' 'KernelStack: 8568 kB' 'PageTables: 2680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 657940 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 335072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:18.039-00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [IFS=': '/read/continue scan over node1 meminfo fields (MemTotal through HugePages_Free), none matching HugePages_Surp] 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:18.041 node0=512 expecting 512 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:18.041
18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:18.041 node1=512 expecting 512 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:18.041 00:06:18.041 real 0m4.104s 00:06:18.041 user 0m1.483s 00:06:18.041 sys 0m2.673s 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.041 18:50:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:18.041 ************************************ 00:06:18.041 END TEST per_node_1G_alloc 00:06:18.041 ************************************ 00:06:18.041 18:50:32 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:18.041 18:50:32 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:18.041 18:50:32 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.041 18:50:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:18.041 ************************************ 00:06:18.041 START TEST even_2G_alloc 00:06:18.041 ************************************ 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 
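The repeated `IFS=': '` / `read -r var val _` / `continue` entries traced above are setup/common.sh's get_meminfo scanning a per-node meminfo file for one field. A minimal standalone sketch of that pattern follows; the heredoc stands in for /sys/devices/system/node/nodeN/meminfo so it runs anywhere, the sample values come from the node1 printf in the trace, and the function name is ours, not the script's exact implementation:

```shell
# Sketch of the get_meminfo scan seen in the xtrace: split each line on
# ': ', skip non-matching fields (the long runs of "continue" above),
# print the value of the requested field, or 0 if it never appears.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching field: next line
        echo "$val"
        return 0
    done
    echo 0   # field absent from this meminfo stream
}

# Sample fields taken from the node1 printf in the log
get_meminfo_sketch HugePages_Surp <<'EOF'
HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0
EOF
```

Each "continue" line in the log is one non-matching field from this loop, which is why a single get_meminfo call produces dozens of trace entries.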
00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.041 18:50:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:22.227 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:22.227 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@91 -- # local sorted_s 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:22.227 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40971092 kB' 'MemAvailable: 42840124 kB' 'Buffers: 2724 kB' 'Cached: 12892020 kB' 'SwapCached: 308 kB' 'Active: 10588624 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143800 
kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643344 kB' 'Mapped: 166400 kB' 'Shmem: 10597532 kB' 'KReclaimable: 498984 kB' 'Slab: 1161436 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662452 kB' 'KernelStack: 22272 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12802340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219016 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.228 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:22.229 18:50:36 
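The trace above shows one full pass of the script's `get_meminfo` helper: it reads each `/proc/meminfo` line with `IFS=': '`, compares the key against the requested field (`AnonHugePages` here), and falls through with `echo 0` when the value is zero or the field is exhausted. A minimal standalone sketch of that pattern (not the SPDK script itself; the optional second argument is a hypothetical hook so it can be exercised against a sample file instead of a live `/proc/meminfo`):

```shell
# Sketch of the get_meminfo pattern seen in the trace: split each line on
# ': ', match the key against the requested field, print its numeric value.
# Prints 0 when the field is absent, mirroring the fall-through above.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # var holds the meminfo key, val the number, _ swallows the 'kB' unit
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    echo 0
}
```

For example, `get_meminfo HugePages_Total` against the output printed above would yield `1024`, matching the `HugePages_Total: 1024` field in the captured meminfo snapshot.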
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40971612 kB' 'MemAvailable: 42840644 kB' 'Buffers: 2724 kB' 'Cached: 12892032 kB' 'SwapCached: 308 kB' 'Active: 10588804 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143980 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643540 kB' 'Mapped: 166356 kB' 'Shmem: 10597544 kB' 'KReclaimable: 498984 kB' 'Slab: 1161464 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662480 kB' 'KernelStack: 22288 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12802724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.229 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 
18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.230 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.231 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40972116 kB' 'MemAvailable: 42841148 kB' 'Buffers: 2724 kB' 'Cached: 12892032 kB' 'SwapCached: 308 kB' 'Active: 10588844 kB' 'Inactive: 2946468 kB' 'Active(anon): 10144020 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643576 kB' 'Mapped: 166356 kB' 'Shmem: 10597544 kB' 
'KReclaimable: 498984 kB' 'Slab: 1161464 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662480 kB' 'KernelStack: 22304 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12802748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.231 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.231 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.232 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 
18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.233 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.494 18:50:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.494 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- 
# echo nr_hugepages=1024 00:06:22.495 nr_hugepages=1024 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:22.495 resv_hugepages=0 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:22.495 surplus_hugepages=0 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:22.495 anon_hugepages=0 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40972756 kB' 'MemAvailable: 42841788 kB' 'Buffers: 2724 kB' 'Cached: 12892092 kB' 'SwapCached: 308 kB' 
'Active: 10588468 kB' 'Inactive: 2946468 kB' 'Active(anon): 10143644 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643144 kB' 'Mapped: 166356 kB' 'Shmem: 10597604 kB' 'KReclaimable: 498984 kB' 'Slab: 1161464 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662480 kB' 'KernelStack: 22272 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12802768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.495 18:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.495 18:50:36 
00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:22.497 
18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22526876 kB' 'MemUsed: 10112264 kB' 'SwapCached: 296 kB' 'Active: 6410768 kB' 'Inactive: 986232 kB' 'Active(anon): 6117640 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969572 kB' 'Mapped: 112416 kB' 'AnonPages: 430648 kB' 'Shmem: 6489896 kB' 'KernelStack: 13688 kB' 'PageTables: 5836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176116 kB' 'Slab: 503568 kB' 'SReclaimable: 176116 kB' 'SUnreclaim: 327452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.497 18:50:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.498 18:50:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18446420 kB' 'MemUsed: 9209636 kB' 'SwapCached: 12 kB' 'Active: 4178028 kB' 'Inactive: 1960236 kB' 'Active(anon): 4026332 kB' 'Inactive(anon): 294100 kB' 'Active(file): 151696 kB' 'Inactive(file): 1666136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5925576 kB' 'Mapped: 53940 kB' 'AnonPages: 212896 kB' 'Shmem: 4107732 kB' 'KernelStack: 8600 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 657896 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 335028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.498 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 
18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 
18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.499 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.500 18:50:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:22.500 node0=512 expecting 512 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:22.500 node1=512 expecting 512 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:22.500 00:06:22.500 real 0m4.349s 00:06:22.500 user 0m1.583s 00:06:22.500 sys 0m2.842s 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.500 18:50:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:22.500 ************************************ 00:06:22.500 END TEST even_2G_alloc 00:06:22.500 ************************************ 00:06:22.500 18:50:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:22.500 18:50:37 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # 
'[' 2 -le 1 ']' 00:06:22.500 18:50:37 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.500 18:50:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:22.500 ************************************ 00:06:22.500 START TEST odd_alloc 00:06:22.500 ************************************ 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:22.500 18:50:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:26.688 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:26.688 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:26.688 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:26.688 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:26.688 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:26.688 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:26.689 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:26.689 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40976144 kB' 'MemAvailable: 42845176 kB' 'Buffers: 2724 kB' 'Cached: 12892196 kB' 'SwapCached: 308 kB' 'Active: 10591132 kB' 'Inactive: 2946468 kB' 'Active(anon): 10146308 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645696 kB' 'Mapped: 166368 kB' 'Shmem: 10597708 kB' 'KReclaimable: 498984 kB' 'Slab: 1161436 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662452 kB' 'KernelStack: 22384 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12804636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219128 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.689 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.690 18:50:41 
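The trace above is one complete `get_meminfo AnonHugePages` call: the helper snapshots `/proc/meminfo`, then walks it line by line with `IFS=': ' read -r var val _`, hitting `continue` for every field until the requested key matches, at which point it echoes the value (here `0`) and returns. A minimal sketch of that pattern is below — this is a hypothetical reconstruction from the trace, not the actual `setup/common.sh` source, and the optional file argument is an addition for testability:

```shell
# Hedged sketch of the meminfo-lookup loop visible in this trace.
# $1 = field name (e.g. HugePages_Surp), $2 = meminfo file (hypothetical
# parameter; the real helper reads /proc/meminfo or a per-node meminfo).
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp, val=0,
    # and drops a trailing unit such as "kB" into the throwaway "_" field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
```

Called as `get_meminfo HugePages_Surp`, this yields the bare number with the `kB` suffix stripped, which is why the surrounding hugepages test can do arithmetic on values like `surp=0` directly.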
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40977096 kB' 'MemAvailable: 42846128 kB' 'Buffers: 2724 kB' 'Cached: 12892196 kB' 'SwapCached: 308 kB' 'Active: 10590720 kB' 'Inactive: 2946468 kB' 'Active(anon): 10145896 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645200 kB' 'Mapped: 166360 kB' 'Shmem: 10597708 kB' 'KReclaimable: 498984 kB' 'Slab: 1161412 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662428 kB' 'KernelStack: 22400 kB' 'PageTables: 8980 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12806268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219112 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:26.690 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.691 18:50:41 
00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:26.691 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd ...]
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40976088 kB' 'MemAvailable: 42845624 kB' 'Buffers: 2724 kB' 'Cached: 12892216 kB' 'SwapCached: 308 kB' 'Active: 10590760 kB' 'Inactive: 2946468 kB' 'Active(anon): 10145936 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645172 kB' 'Mapped: 166360 kB' 'Shmem: 10597728 kB' 'KReclaimable: 498984 kB' 'Slab: 1161412 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662428 kB' 'KernelStack: 22432 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12806288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219128 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB'
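The `get_meminfo` calls traced above all follow the same pattern: read `/proc/meminfo` (or a per-node meminfo file) line by line with `IFS=': '`, compare each key against the requested one, and echo the matching value. A minimal standalone sketch of that lookup; the function name and the optional file parameter here are illustrative, not the exact `setup/common.sh` source:

```shell
# Sketch of the lookup loop that setup/common.sh's get_meminfo appears to
# implement (illustrative reconstruction, not the upstream code verbatim).
get_meminfo_sketch() {
	local get=$1 mem_f=${2:-/proc/meminfo} var val _
	while IFS=': ' read -r var val _; do
		# Skip every line whose key is not the one requested, exactly
		# like the repeated "[[ <key> == ... ]] / continue" trace above.
		[[ $var == "$get" ]] || continue
		echo "$val"   # numeric value only; any trailing "kB" lands in "_"
		return 0
	done < "$mem_f"
	return 1
}
```

With the values from this run, `get_meminfo_sketch HugePages_Surp` would print `0` and `get_meminfo_sketch HugePages_Total` would print `1025`.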
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:26.692 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats for every /proc/meminfo key from MemFree through HugePages_Free ...]
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:26.694 nr_hugepages=1025
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:26.694 resv_hugepages=0
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:26.694 surplus_hugepages=0
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:26.694 anon_hugepages=0
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40975640 kB' 'MemAvailable: 42844672 kB' 'Buffers: 2724 kB' 'Cached: 12892236 kB' 'SwapCached: 308 kB' 'Active: 10591112 kB' 'Inactive: 2946468 kB' 'Active(anon): 10146288 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645508 kB' 'Mapped: 166360 kB' 'Shmem: 10597748 kB' 'KReclaimable: 498984 kB' 'Slab: 1161348 kB' 'SReclaimable: 498984 kB' 'SUnreclaim: 662364 kB' 'KernelStack: 22544 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12806308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219208 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB'
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
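When a specific NUMA node is requested, `common.sh` reads `/sys/devices/system/node/node<N>/meminfo` instead, where every line carries a `Node <N> ` prefix; the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace strips that prefix with an extglob pattern so the same parser handles both file formats. A small bash demonstration of that expansion (the sample array contents are made up, not from this run):

```shell
# Demonstration of the "Node N " prefix strip seen at setup/common.sh@29.
# extglob must be enabled for the +([0-9]) pattern to be recognized.
shopt -s extglob

# Hypothetical per-node meminfo lines, as mapfile -t mem would capture them.
mem=('Node 0 MemTotal: 60295196 kB' 'Node 0 HugePages_Total: 1025')

# Strip the leading "Node <digits> " from every element in one expansion.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
```

After the expansion the elements read `MemTotal: 60295196 kB` and `HugePages_Total: 1025`, i.e. the same shape as plain `/proc/meminfo` lines.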
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:26.694 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle repeats for MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty and Writeback ...]
00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.695 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 
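The trace above repeats one pattern: a `get_meminfo`-style loop walks a meminfo file with `IFS=': ' read -r var val _`, skipping fields until the requested key matches, then echoes its value; for a specific NUMA node it reads `/sys/devices/system/node/nodeN/meminfo` and strips the `Node N ` prefix from each line. A minimal standalone sketch of that loop (not the actual SPDK `common.sh` helper; the `MEMINFO_F` override is a hypothetical hook added here so the sketch can be exercised against a test file, and the `+([0-9])` pattern assumes `extglob`):

```shell
#!/usr/bin/env bash
# extglob is needed for the +([0-9]) pattern used to strip the
# "Node N " prefix that per-node meminfo files carry.
shopt -s extglob

# Sketch of the get_meminfo loop traced above: print the value of
# field $1 from meminfo, optionally for NUMA node $2.
# MEMINFO_F is a hypothetical override used here for testability.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=${MEMINFO_F:-/proc/meminfo} mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Drop "Node 0 " etc. prefixes so per-node files parse like /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
```

Called as `get_meminfo HugePages_Total` or `get_meminfo HugePages_Surp 0`, matching the calls visible in the hugepages.sh trace.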
00:06:26.696 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22538792 kB' 'MemUsed: 10100348 kB' 'SwapCached: 296 kB' 'Active: 6412432 kB' 'Inactive: 986232 kB' 'Active(anon): 6119304 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969604 kB' 'Mapped: 112428 kB' 'AnonPages: 432160 kB' 'Shmem: 6489928 kB' 'KernelStack: 13784 kB' 'PageTables: 6340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176116 kB' 'Slab: 503068 kB' 'SReclaimable: 176116 kB' 'SUnreclaim: 326952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.696
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.696
[... identical common.sh@32/@31 skip trace repeated for each remaining node0 field (MemFree through HugePages_Free), none matching HugePages_Surp ...]
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.697
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18434836 kB' 'MemUsed: 9221220 kB' 'SwapCached: 12 kB' 'Active: 4179200 kB' 'Inactive: 1960236 kB' 'Active(anon): 4027504 kB' 'Inactive(anon): 294100 kB' 'Active(file): 151696 kB' 'Inactive(file): 1666136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5925668 kB' 'Mapped: 53940 kB' 'AnonPages: 213856 kB' 'Shmem: 4107824 kB' 'KernelStack: 8680 kB' 'PageTables: 3148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 658280 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 335412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:06:26.698
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698
18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698
[... identical common.sh@32/@31 skip trace for node1 fields MemFree, MemUsed, SwapCached, Active, Inactive ...]
18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.698 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:06:26.699 node0=512 expecting 513 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:06:26.699 node1=513 expecting 512 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:06:26.699 00:06:26.699 real 0m4.137s 00:06:26.699 user 0m1.508s 00:06:26.699 sys 0m2.676s 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.699 18:50:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:26.699 ************************************ 00:06:26.699 END TEST odd_alloc 00:06:26.699 ************************************ 00:06:26.699 18:50:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:26.699 18:50:41 setup.sh.hugepages -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:26.699 18:50:41 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.699 18:50:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:26.699 ************************************ 00:06:26.699 START TEST custom_alloc 00:06:26.699 ************************************ 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:26.699 18:50:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:26.699 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:26.700 18:50:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:26.700 18:50:41 
setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:30.889 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:30.889 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:30.889 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:06:30.889 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:30.889 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:30.889 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:30.889 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:30.890 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 39949480 kB' 'MemAvailable: 41818496 kB' 'Buffers: 2724 kB' 'Cached: 12892348 kB' 'SwapCached: 308 kB' 'Active: 10591560 kB' 'Inactive: 2946468 kB' 'Active(anon): 10146736 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645396 kB' 'Mapped: 166488 kB' 'Shmem: 10597860 kB' 'KReclaimable: 498952 kB' 'Slab: 1161428 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662476 kB' 'KernelStack: 22240 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12806620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.890 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:30.891 18:50:45 
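The trace above shows `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one line at a time with `IFS=': '`, skipping every key that doesn't match the requested one, then echoing the matching value (here `AnonHugePages`, yielding `anon=0`). A minimal standalone sketch of that pattern — reconstructed from the trace, not copied from the actual SPDK source, so names and details are assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace: scan
# /proc/meminfo with IFS=': ' so "Key:   value kB" splits into
# var=Key, val=value; print the value for the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            # Matching key found; emit its numeric value and stop.
            echo "${val:-0}"
            return 0
        fi
        # Non-matching key: fall through to the next line, which is
        # what each "continue" iteration in the trace corresponds to.
    done < /proc/meminfo
    echo 0   # key absent on this kernel; report 0 like the helper does
}

get_meminfo AnonHugePages
```

The repeated `[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` lines in the trace are simply this loop's comparison, with the pattern backslash-escaped by xtrace output.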
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:30.891 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 39949024 kB' 'MemAvailable: 41818040 kB' 'Buffers: 2724 kB' 'Cached: 12892352 kB' 'SwapCached: 308 kB' 'Active: 10590992 kB' 'Inactive: 2946468 kB' 'Active(anon): 10146168 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645404 kB' 'Mapped: 166464 kB' 'Shmem: 10597864 kB' 'KReclaimable: 498952 kB' 'Slab: 1161412 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662460 kB' 'KernelStack: 22192 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12804000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218840 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.892 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 
18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@19 -- # local var val 00:06:30.893 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 39949032 kB' 'MemAvailable: 41818048 kB' 'Buffers: 2724 kB' 'Cached: 12892372 kB' 'SwapCached: 308 kB' 'Active: 10591480 kB' 'Inactive: 2946468 kB' 'Active(anon): 10146656 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645292 kB' 'Mapped: 166464 kB' 'Shmem: 10597884 kB' 'KReclaimable: 498952 kB' 'Slab: 1161412 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662460 kB' 'KernelStack: 22208 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12804024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218856 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.894 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 
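The trace above is `setup/common.sh`'s `get_meminfo` loop: it reads `/proc/meminfo` (or a per-node sysfs copy) into an array, strips any `Node N ` prefix, then splits each line with `IFS=': '` and skips every key until it hits the requested one. A hypothetical, simplified re-implementation of that pattern (the function name matches the trace, but the file argument and the synthetic meminfo snippet below are illustration-only assumptions, not the real helper's interface):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern

# Simplified sketch of the get_meminfo pattern traced above: scan a
# meminfo-style file, strip any "Node N " prefix, and print the value
# of the requested field. The optional file argument is an assumption
# added here so the sketch can run against a synthetic snippet.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _ mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix each line with "Node 0 "; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    # IFS=': ' splits "HugePages_Total:   1536" into var / val / rest.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example against a synthetic per-node meminfo snippet:
tmp=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 60295196 kB' \
              'Node 0 HugePages_Total: 1536' > "$tmp"
get_meminfo HugePages_Total "$tmp"   # prints 1536
rm -f "$tmp"
```

The character-by-character xtrace (`[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` and so on) is simply bash echoing each `[[ $var == "$get" ]]` comparison with the pattern quoted per character.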
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.895 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:06:30.896 nr_hugepages=1536 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:30.896 resv_hugepages=0 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:30.896 surplus_hugepages=0 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:30.896 anon_hugepages=0 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages 
)) 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 39949284 kB' 'MemAvailable: 41818300 kB' 'Buffers: 2724 kB' 'Cached: 12892376 kB' 'SwapCached: 308 kB' 'Active: 10591152 kB' 'Inactive: 2946468 kB' 'Active(anon): 10146328 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644960 kB' 'Mapped: 166464 kB' 'Shmem: 10597888 kB' 'KReclaimable: 498952 kB' 'Slab: 1161412 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662460 kB' 'KernelStack: 22192 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12804048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218856 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 
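Once `surp` and `resv` are extracted, `setup/hugepages.sh` (lines @99–@109 in the trace) echoes the derived counters and asserts that the requested count matches the kernel's view: `(( 1536 == nr_hugepages + surp + resv ))` and `(( 1536 == nr_hugepages ))`. A minimal sketch of that bookkeeping, assuming the same snapshot values as the log (the function name and argument order here are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the hugepages accounting check performed after the scans:
# the requested page count must equal HugePages_Total net of surplus
# and reserved pages. Inputs mirror the snapshot in the log above.
verify_hugepages() {
    local requested=$1 total=$2 surp=$3 resv=$4
    # Equivalent of hugepages.sh@107/@109: the total must decompose
    # into nr_hugepages + surplus + reserved, and match the request.
    (( requested == total - surp - resv )) || return 1
    echo "nr_hugepages=$(( total - surp - resv ))"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
}

verify_hugepages 1536 1536 0 0
# prints:
#   nr_hugepages=1536
#   resv_hugepages=0
#   surplus_hugepages=0
```

This matches the `nr_hugepages=1536`, `resv_hugepages=0`, `surplus_hugepages=0`, `anon_hugepages=0` lines the test echoes in the log before re-reading `HugePages_Total`.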
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.896 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 
18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 
18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:30.897 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:30.898 18:50:45 
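The trace above is setup/common.sh's meminfo scan: each "Key: value ..." line is split with `IFS=': '` into key and value, keys are skipped with `continue` until the requested one (here HugePages_Total) matches, and then its value is echoed. A minimal self-contained sketch of that pattern, fed hypothetical sample lines instead of the live /proc/meminfo so it runs anywhere:

```shell
# Sketch of the get_meminfo loop visible in the trace (setup/common.sh@31-33).
# The function name and sample values below are stand-ins, not the exact
# script or this machine's numbers.
get_meminfo() {
    local get=$1 var val _
    # Split each "Key: value kB" line on ':' and spaces; skip until the key matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

sample='MemTotal: 60295196 kB
HugePages_Total: 1536
HugePages_Free: 1536
Hugepagesize: 2048 kB'

get_meminfo HugePages_Total <<<"$sample"   # prints 1536
```

The `_` in the `read` swallows the trailing "kB" unit, which is why the traced script can compare the bare value numerically right after.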
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22544640 kB' 'MemUsed: 10094500 kB' 'SwapCached: 296 kB' 'Active: 6413180 kB' 'Inactive: 986232 kB' 'Active(anon): 6120052 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969732 kB' 'Mapped: 112444 kB' 'AnonPages: 432844 kB' 'Shmem: 6490056 kB' 'KernelStack: 13640 kB' 'PageTables: 5624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176084 kB' 'Slab: 503060 kB' 'SReclaimable: 176084 kB' 'SUnreclaim: 326976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.898 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.898 18:50:45
[trace condensed: identical compare/continue iterations for each remaining node0 meminfo key (MemFree through HugePages_Free); none matches HugePages_Surp]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:06:30.899 18:50:45
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 17404696 kB' 'MemUsed: 10251360 kB' 'SwapCached: 12 kB' 'Active: 4178288 kB' 'Inactive: 1960236 kB' 'Active(anon): 4026592 kB' 'Inactive(anon): 294100 kB' 'Active(file): 151696 kB' 'Inactive(file): 1666136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5925712 kB' 'Mapped: 54020 kB' 'AnonPages: 212444 kB' 'Shmem: 4107868 kB' 'KernelStack: 8568 kB' 'PageTables: 2748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 658352 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 335484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.899 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:30.900 node0=512 expecting 512 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:30.900 node1=1024 expecting 1024 00:06:30.900 18:50:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:30.900 00:06:30.900 real 0m3.993s 00:06:30.900 user 0m1.416s 00:06:30.901 sys 0m2.520s 00:06:30.901 18:50:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.901 18:50:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:30.901 ************************************ 00:06:30.901 END TEST custom_alloc 00:06:30.901 ************************************ 00:06:30.901 18:50:45 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:30.901 18:50:45 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:30.901 18:50:45 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.901 18:50:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:30.901 ************************************ 00:06:30.901 START TEST no_shrink_alloc 00:06:30.901 ************************************ 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 
-- # (( size >= default_hugepages )) 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.901 18:50:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:35.092 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:35.092 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:35.092 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:35.092 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:35.092 
0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:35.092 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:35.093 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 
00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40975408 kB' 'MemAvailable: 42844424 kB' 'Buffers: 2724 kB' 'Cached: 12892524 kB' 'SwapCached: 308 kB' 'Active: 10592348 kB' 'Inactive: 2946468 kB' 'Active(anon): 10147524 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646516 kB' 'Mapped: 166452 kB' 'Shmem: 10598036 kB' 'KReclaimable: 498952 kB' 'Slab: 1160684 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 661732 kB' 'KernelStack: 22288 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12806276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 
'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.093 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same IFS=': ' / read -r var val _ / [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for each remaining /proc/meminfo field (SwapCached, Active, Inactive, ..., HardwareCorrupted) until the AnonHugePages field matches]
00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:35.094 18:50:49
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40972520 kB' 'MemAvailable: 42841536 kB' 'Buffers: 2724 kB' 'Cached: 12892528 kB' 'SwapCached: 308 kB' 'Active: 10595732 kB' 'Inactive: 2946468 kB' 'Active(anon): 10150908 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650396 kB' 'Mapped: 166956 kB' 'Shmem: 10598040 kB' 'KReclaimable: 498952 kB' 'Slab: 1160732 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 661780 kB' 'KernelStack: 22288 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12809332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218968 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.094 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same field-by-field comparison against \H\u\g\e\P\a\g\e\s\_\S\u\r\p repeats for MemFree through HugePages_Rsvd, each iteration ending in "continue"]
00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40968496 kB' 'MemAvailable: 42837512 kB' 'Buffers: 2724 kB' 'Cached: 12892544 kB' 'SwapCached: 308 kB' 'Active: 10592576 kB' 'Inactive: 2946468 kB' 'Active(anon): 10147752 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646708 kB' 'Mapped: 166452 kB' 'Shmem: 10598056 kB' 'KReclaimable: 498952 kB' 'Slab: 1160748 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 661796 kB' 'KernelStack: 22272 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12805356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.096 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.097 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 
18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:35.098 nr_hugepages=1024 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:35.098 resv_hugepages=0 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:35.098 surplus_hugepages=0 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:35.098 anon_hugepages=0 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.098 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40969348 kB' 'MemAvailable: 42838364 kB' 'Buffers: 2724 kB' 'Cached: 12892568 kB' 'SwapCached: 308 kB' 'Active: 10592720 kB' 'Inactive: 2946468 kB' 'Active(anon): 10147896 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646828 kB' 'Mapped: 166452 kB' 'Shmem: 10598080 kB' 'KReclaimable: 498952 kB' 'Slab: 1160748 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 661796 kB' 'KernelStack: 22288 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12805380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:35.099 18:50:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:35.099 
18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:35.099 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 IFS=': '/read and @32 compare/continue trace entries repeated for every remaining /proc/meminfo field ...]
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21491436 kB' 'MemUsed: 11147704 kB' 'SwapCached: 296 kB' 'Active: 6411696 kB' 'Inactive: 986232 kB' 'Active(anon): 6118568 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969796 kB' 'Mapped: 112456 kB' 'AnonPages: 431280 kB' 'Shmem: 6490120 kB' 'KernelStack: 13672 kB' 'PageTables: 5796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176084 kB' 'Slab: 502512 kB' 'SReclaimable: 176084 kB' 'SUnreclaim: 326428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:35.100 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 IFS=': '/read and @32 compare/continue trace entries repeated for every remaining node0 meminfo field ...]
00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:35.102 node0=1024 expecting 1024 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:35.102 18:50:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:39.298 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.2 (8086 2021): Already 
using the vfio-pci driver 00:06:39.298 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:39.298 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:39.298 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.298 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41000628 kB' 'MemAvailable: 42869644 kB' 'Buffers: 2724 kB' 'Cached: 12892676 kB' 'SwapCached: 308 kB' 'Active: 10594180 kB' 'Inactive: 2946468 kB' 'Active(anon): 10149356 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647572 kB' 'Mapped: 166384 kB' 'Shmem: 10598188 kB' 'KReclaimable: 498952 kB' 'Slab: 1161072 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662120 kB' 'KernelStack: 22288 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12805748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218984 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 
18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.299 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 
18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:39.300 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41000560 kB' 'MemAvailable: 42869576 kB' 'Buffers: 2724 kB' 'Cached: 12892680 kB' 'SwapCached: 308 kB' 'Active: 10593904 kB' 'Inactive: 2946468 kB' 'Active(anon): 10149080 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647396 kB' 'Mapped: 166416 kB' 'Shmem: 10598192 kB' 'KReclaimable: 498952 kB' 'Slab: 1161032 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662080 kB' 'KernelStack: 22320 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12805772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 
18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.300 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.301 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41004908 kB' 'MemAvailable: 42873924 kB' 'Buffers: 2724 kB' 'Cached: 12892708 kB' 'SwapCached: 308 kB' 'Active: 10594140 kB' 'Inactive: 2946468 kB' 'Active(anon): 10149316 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647648 kB' 'Mapped: 166416 kB' 'Shmem: 10598220 kB' 'KReclaimable: 498952 kB' 'Slab: 1161028 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662076 kB' 'KernelStack: 22368 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12806288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
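The trace above repeatedly exercises the same helper: `get_meminfo` in `setup/common.sh` snapshots `/proc/meminfo` (or a per-node `meminfo` under `/sys/devices/system/node/`), splits each line on `': '` via `read -r var val _`, skips non-matching keys with `continue`, and echoes the value once the requested field matches. A minimal POSIX-sh sketch of that pattern (the function name and argument handling here are illustrative, not SPDK's exact helper):

```shell
#!/bin/sh
# Sketch of the get_meminfo pattern visible in the trace:
# split each meminfo line on ': ', compare the key, print the value.
get_meminfo_sketch() {
    get=$1                    # e.g. HugePages_Surp
    mem_f=${2:-/proc/meminfo} # optional file override, for testing
    while IFS=': ' read -r var val _; do
        # var holds the key, val the number, _ swallows the "kB" unit
        [ "$var" = "$get" ] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1                  # key not found
}
```

On the node in the trace this would print `0` for `HugePages_Surp`, matching the `surp=0` assignment that follows the scan. The real script uses bash's `[[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` (the escaping forces a literal match), which is why every non-matching key appears in the xtrace as a separate comparison.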
var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.302 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:39.304 
nr_hugepages=1024 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:39.304 resv_hugepages=0 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:39.304 surplus_hugepages=0 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:39.304 anon_hugepages=0 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41005516 kB' 'MemAvailable: 42874532 kB' 'Buffers: 2724 kB' 'Cached: 12892748 kB' 'SwapCached: 308 kB' 'Active: 10593836 kB' 'Inactive: 2946468 kB' 'Active(anon): 10149012 kB' 'Inactive(anon): 1094080 kB' 'Active(file): 444824 kB' 'Inactive(file): 1852388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8283900 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647312 kB' 'Mapped: 166416 kB' 'Shmem: 10598260 kB' 'KReclaimable: 498952 kB' 'Slab: 1161028 kB' 'SReclaimable: 498952 kB' 'SUnreclaim: 662076 kB' 'KernelStack: 22368 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12806312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4361588 kB' 'DirectMap2M: 57190400 kB' 'DirectMap1G: 7340032 kB' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.304 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.305 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 
1024 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21507756 kB' 'MemUsed: 11131384 kB' 'SwapCached: 296 kB' 'Active: 6414316 kB' 'Inactive: 986232 kB' 'Active(anon): 6121188 kB' 'Inactive(anon): 799980 kB' 'Active(file): 293128 kB' 'Inactive(file): 186252 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6969840 kB' 'Mapped: 112460 kB' 'AnonPages: 433876 kB' 'Shmem: 6490164 kB' 'KernelStack: 13784 kB' 'PageTables: 6556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176084 kB' 'Slab: 502880 kB' 'SReclaimable: 176084 kB' 'SUnreclaim: 326796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.306 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.307 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:39.308 node0=1024 expecting 1024 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:39.308 00:06:39.308 real 0m8.449s 00:06:39.308 user 0m3.047s 00:06:39.308 sys 0m5.540s 00:06:39.308 18:50:53 
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.308 18:50:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:39.308 ************************************ 00:06:39.308 END TEST no_shrink_alloc 00:06:39.308 ************************************ 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:39.308 18:50:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:39.308 00:06:39.308 real 0m31.461s 00:06:39.308 user 0m10.784s 00:06:39.308 sys 0m19.547s 00:06:39.308 18:50:53 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.308 18:50:53 setup.sh.hugepages -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.308 ************************************ 00:06:39.308 END TEST hugepages 00:06:39.308 ************************************ 00:06:39.308 18:50:53 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:06:39.308 18:50:53 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:39.308 18:50:53 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.308 18:50:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:39.308 ************************************ 00:06:39.308 START TEST driver 00:06:39.308 ************************************ 00:06:39.308 18:50:54 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:06:39.566 * Looking for test storage... 00:06:39.566 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:06:39.566 18:50:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:39.566 18:50:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:39.566 18:50:54 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:46.153 18:50:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:46.153 18:50:59 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.153 18:50:59 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.153 18:50:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:46.154 ************************************ 00:06:46.154 START TEST guess_driver 00:06:46.154 ************************************ 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 
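The `pick_driver` trace that follows chooses `vfio-pci` when the host exposes IOMMU groups and `modprobe --show-depends vfio_pci` resolves to real `.ko` modules (the `== *\.\k\o*` test). A self-contained sketch of that decision, with the modprobe output passed in as a string so it does not have to run on the build host; the function name and the `uio_pci_generic` fallback are illustrative assumptions rather than the literal `setup/driver.sh` code:

```shell
#!/usr/bin/env bash
# Sketch of the vfio-pci selection logic traced below: usable only when
# IOMMU groups exist and the modprobe dependency listing mentions .ko
# modules (i.e. the module chain actually resolves).
pick_vfio_sketch() {
    local n_iommu_groups=$1 depends=$2
    # No IOMMU groups: vfio-pci cannot be used safely, fall back.
    (( n_iommu_groups > 0 )) || { echo uio_pci_generic; return 0; }
    # Same pattern as driver.sh: does the depends output name any .ko file?
    if [[ $depends == *.ko* ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}
```

In the log above the check passes with 256 IOMMU groups and an insmod chain ending in `vfio-pci.ko.xz`, which is why `Looking for driver=vfio-pci` is printed.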
00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:46.154 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:46.154 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:46.154 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:46.154 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:46.154 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:46.154 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:46.154 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:46.154 Looking for driver=vfio-pci 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:46.154 18:50:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:49.439 18:51:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:50.816 18:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:50.816 18:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:50.816 18:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:51.075 18:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:51.075 18:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:51.075 18:51:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:51.075 18:51:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:56.346 00:06:56.346 real 0m11.134s 00:06:56.346 user 0m2.939s 00:06:56.346 sys 0m5.897s 00:06:56.347 18:51:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.347 18:51:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:56.347 ************************************ 00:06:56.347 END TEST guess_driver 00:06:56.347 ************************************ 00:06:56.347 00:06:56.347 real 0m16.945s 00:06:56.347 user 0m4.581s 00:06:56.347 sys 0m9.224s 00:06:56.347 18:51:10 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.347 18:51:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:56.347 ************************************ 00:06:56.347 END TEST driver 00:06:56.347 ************************************ 00:06:56.347 18:51:11 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:06:56.347 
18:51:11 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:56.347 18:51:11 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.347 18:51:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:56.347 ************************************ 00:06:56.347 START TEST devices 00:06:56.347 ************************************ 00:06:56.347 18:51:11 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:06:56.605 * Looking for test storage... 00:06:56.605 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:06:56.605 18:51:11 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:56.605 18:51:11 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:56.606 18:51:11 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:56.606 18:51:11 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:00.805 18:51:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:00.805 18:51:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:07:00.805 18:51:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:07:00.805 18:51:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:07:00.805 18:51:15 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:07:00.805 18:51:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:00.806 18:51:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@196 -- # 
blocks=() 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:00.806 18:51:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:00.806 No valid GPT data, bailing 00:07:00.806 18:51:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:00.806 18:51:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:00.806 18:51:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@205 -- # 
blocks+=("${block##*/}") 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:00.806 18:51:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:00.806 18:51:15 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:00.806 18:51:15 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.806 18:51:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:00.806 ************************************ 00:07:00.806 START TEST nvme_mount 00:07:00.806 ************************************ 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:00.806 18:51:15 
setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:00.806 18:51:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:02.184 Creating new GPT entries in memory. 00:07:02.184 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:02.184 other utilities. 00:07:02.184 18:51:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:02.184 18:51:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:02.184 18:51:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:02.184 18:51:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:02.184 18:51:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:03.121 Creating new GPT entries in memory. 00:07:03.121 The operation has completed successfully. 
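The sgdisk bounds just printed (`--new=1:2048:2099199`) come from the loop in setup/common.sh@57-60 above. A minimal standalone sketch of that arithmetic, using the 1 GiB byte size from common.sh@41 — the command is only echoed, not run, since the real script needs root, flock, and a scratch NVMe device:

```shell
#!/usr/bin/env bash
# Sketch of the partition-bound arithmetic from setup/common.sh@57-60.
# size starts as bytes (1073741824 = 1 GiB) and is divided by the
# 512-byte sector size; partitions are laid out back to back starting
# at LBA 2048.
disk=nvme0n1
part_no=1
size=$(( 1073741824 / 512 ))    # 2097152 sectors per partition
part_start=0 part_end=0

for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "flock /dev/$disk sgdisk /dev/$disk --new=${part}:${part_start}:${part_end}"
done
```

With part_no=1 this reproduces the bounds seen in the trace, 1:2048:2099199; the dm_mount test later runs the same loop with part_no=2, which continues at part_end+1 and yields 2:2099200:4196351.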
00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1545797 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:03.121 
18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:03.121 18:51:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:07.316 18:51:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:07.316 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:07.317 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:07.317 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:07.317 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:07:07.317 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:07.317 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:07.317 18:51:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:07.317 18:51:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:11.509 18:51:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:11.509 18:51:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:15.701 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:15.701 00:07:15.701 real 0m14.868s 00:07:15.701 user 0m4.378s 00:07:15.701 sys 0m8.467s 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:15.701 18:51:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:15.701 ************************************ 00:07:15.701 END TEST nvme_mount 00:07:15.701 ************************************ 00:07:15.701 18:51:30 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:15.701 18:51:30 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:15.701 18:51:30 setup.sh.devices -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:07:15.701 18:51:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:15.701 ************************************ 00:07:15.701 START TEST dm_mount 00:07:15.701 ************************************ 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:15.701 18:51:30 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:15.701 18:51:30 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:17.078 Creating new GPT entries in memory. 00:07:17.078 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:17.078 other utilities. 00:07:17.078 18:51:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:17.078 18:51:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:17.078 18:51:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:17.078 18:51:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:17.078 18:51:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:18.054 Creating new GPT entries in memory. 00:07:18.054 The operation has completed successfully. 00:07:18.054 18:51:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:18.054 18:51:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:18.054 18:51:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:18.054 18:51:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:18.054 18:51:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:07:19.016 The operation has completed successfully. 
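Both tests hand the partitioned device to the same mkfs helper (setup/common.sh@66-72 in the trace): make the mount point, format ext4 with -qF, optionally capping the filesystem size (the whole-disk nvme_mount pass above used 1024M), then mount. A sketch of that flow which only prints the commands, since executing them needs root and a scratch device; the helper name and paths here are illustrative, not from the script:

```shell
#!/usr/bin/env bash
# Sketch of the mkfs flow from setup/common.sh@66-72: mkdir -p the
# mount point, mkfs.ext4 -qF the device (with an optional size cap
# such as 1024M), then mount it.  Commands are echoed, not run, so
# the sketch is safe anywhere.
mkfs_sketch() {
    local dev=$1 mount=$2 size=$3
    echo "mkdir -p $mount"
    echo "mkfs.ext4 -qF $dev${size:+ $size}"   # size appended only if given
    echo "mount $dev $mount"
}

mkfs_sketch /dev/nvme0n1 /tmp/nvme_mount 1024M
```

Dropping the third argument reproduces the partition case from earlier in the trace, where mkfs.ext4 formats the whole of /dev/nvme0n1p1 with no size cap.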
00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1551058 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:19.016 
18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:19.016 18:51:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 
18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 
18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:23.202 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file= 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:23.203 18:51:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:27.391 18:51:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:27.391 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:07:27.391 18:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:27.391 18:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:27.391 00:07:27.391 real 0m11.575s 00:07:27.391 user 0m2.941s 00:07:27.391 sys 0m5.754s 00:07:27.391 18:51:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.391 18:51:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:27.391 ************************************ 00:07:27.391 END TEST dm_mount 00:07:27.391 ************************************ 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:27.391 18:51:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:27.649 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:27.649 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:07:27.649 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:27.649 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:27.649 18:51:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:27.649 00:07:27.649 real 0m31.287s 00:07:27.649 user 0m8.873s 00:07:27.649 sys 0m17.379s 00:07:27.649 18:51:42 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.649 18:51:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:27.649 ************************************ 00:07:27.649 END TEST devices 00:07:27.649 ************************************ 00:07:27.649 00:07:27.649 real 1m49.333s 00:07:27.649 user 0m33.607s 00:07:27.649 sys 1m4.646s 00:07:27.649 18:51:42 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.649 18:51:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:27.649 ************************************ 00:07:27.649 END TEST setup.sh 00:07:27.649 ************************************ 00:07:27.907 18:51:42 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:07:32.094 Hugepages 00:07:32.094 node hugesize free / total 00:07:32.094 node0 1048576kB 0 / 0 00:07:32.095 node0 2048kB 1024 / 1024 00:07:32.095 node1 1048576kB 0 / 0 00:07:32.095 node1 2048kB 1024 / 1024 00:07:32.095 00:07:32.095 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:32.095 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:32.095 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:32.095 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:32.095 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:32.095 18:51:46 -- spdk/autotest.sh@130 -- # uname -s 00:07:32.095 18:51:46 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:32.095 18:51:46 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:32.095 18:51:46 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:07:36.283 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:36.283 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:37.659 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:07:37.659 18:51:52 -- common/autotest_common.sh@1531 -- # sleep 1 00:07:39.035 18:51:53 -- common/autotest_common.sh@1532 -- # bdfs=() 
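The `ioatdma -> vfio-pci` lines printed by setup.sh correspond to rebinding each PCI function to a different driver via sysfs. A print-only sketch of one plausible sysfs sequence (the `driver_override` mechanism is an assumption about how setup.sh does it; actually executing this requires root and the real devices):

```shell
# Print the sysfs writes that would rebind one PCI function to a new
# driver: unbind from the current driver, set driver_override, reprobe.
# Print-only: nothing here touches /sys.
rebind_cmds() {
    local bdf=$1 new_driver=$2
    printf 'echo %s > /sys/bus/pci/devices/%s/driver/unbind\n' "$bdf" "$bdf"
    printf 'echo %s > /sys/bus/pci/devices/%s/driver_override\n' "$new_driver" "$bdf"
    printf 'echo %s > /sys/bus/pci/drivers_probe\n' "$bdf"
}
rebind_cmds 0000:00:04.0 vfio-pci
```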
00:07:39.035 18:51:53 -- common/autotest_common.sh@1532 -- # local bdfs 00:07:39.035 18:51:53 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:07:39.035 18:51:53 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:07:39.035 18:51:53 -- common/autotest_common.sh@1512 -- # bdfs=() 00:07:39.035 18:51:53 -- common/autotest_common.sh@1512 -- # local bdfs 00:07:39.035 18:51:53 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:39.035 18:51:53 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:39.035 18:51:53 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:07:39.035 18:51:53 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:07:39.035 18:51:53 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:07:39.035 18:51:53 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:43.223 Waiting for block devices as requested 00:07:43.223 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:43.223 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:43.223 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:43.223 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:43.223 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:43.223 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:43.483 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:43.483 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:43.483 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:43.742 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:43.742 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:43.742 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:44.000 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:44.000 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:44.000 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:44.258 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:07:44.258 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:44.517 18:51:59 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:07:44.517 18:51:59 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme 00:07:44.517 18:51:59 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:07:44.517 18:51:59 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:07:44.517 18:51:59 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1544 -- # grep oacs 00:07:44.517 18:51:59 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:07:44.517 18:51:59 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:07:44.517 18:51:59 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:07:44.517 18:51:59 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:07:44.517 18:51:59 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:07:44.517 18:51:59 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:07:44.517 18:51:59 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:07:44.517 18:51:59 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:07:44.517 18:51:59 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:07:44.517 18:51:59 -- common/autotest_common.sh@1556 -- # continue 
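The `oacs=' 0xe'` / `oacs_ns_manage=8` steps above extract the Optional Admin Command Support field from `nvme id-ctrl` and mask out the namespace-management bit (0x8). Re-running the same grep/cut pipeline against a canned one-line excerpt (the `0xe` value is the one the log reports; the excerpt's spacing is an assumption):

```shell
# Replay the oacs extraction from common/autotest_common.sh against a
# canned id-ctrl line, then mask bit 3 (0x8), which advertises namespace
# management support.
id_ctrl_sample='oacs      : 0xe'
oacs=$(printf '%s\n' "$id_ctrl_sample" | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))
echo "$oacs_ns_manage"
```

With 0xe (binary 1110) the masked result is 8, which is why the trace then takes the `[[ 8 -ne 0 ]]` branch.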
00:07:44.517 18:51:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:44.517 18:51:59 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:44.517 18:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:44.517 18:51:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:44.517 18:51:59 -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:44.517 18:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:44.517 18:51:59 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:07:48.706 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:48.706 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:48.707 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:48.707 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:50.647 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:07:50.647 18:52:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:50.647 18:52:04 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:50.647 18:52:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.647 18:52:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:50.647 18:52:05 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:07:50.647 18:52:05 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 
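`get_nvme_bdfs_by_id 0x0a54` (invoked just above) keeps only the controllers whose PCI device id file matches, via `cat /sys/bus/pci/devices/$bdf/device`. The same filter can be sketched against a throwaway mock of the sysfs layout — paths and the second device id are fabricated for the demo:

```shell
# Filter a BDF list by PCI device id, the way get_nvme_bdfs_by_id does,
# using a temporary mock of /sys/bus/pci/devices so this runs anywhere.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:d8:00.0" "$sysfs/0000:d9:00.0"
echo 0x0a54 > "$sysfs/0000:d8:00.0/device"   # matches, like the log's NVMe
echo 0x0b60 > "$sysfs/0000:d9:00.0/device"   # fabricated non-match

matched=()
for bdf in 0000:d8:00.0 0000:d9:00.0; do
    [ "$(cat "$sysfs/$bdf/device")" = "0x0a54" ] && matched+=( "$bdf" )
done
printf '%s\n' "${matched[@]}"
rm -rf "$sysfs"
```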
00:07:50.647 18:52:05 -- common/autotest_common.sh@1576 -- # bdfs=() 00:07:50.647 18:52:05 -- common/autotest_common.sh@1576 -- # local bdfs 00:07:50.647 18:52:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:07:50.647 18:52:05 -- common/autotest_common.sh@1512 -- # bdfs=() 00:07:50.647 18:52:05 -- common/autotest_common.sh@1512 -- # local bdfs 00:07:50.647 18:52:05 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:50.647 18:52:05 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:50.647 18:52:05 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:07:50.647 18:52:05 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:07:50.647 18:52:05 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:07:50.647 18:52:05 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:07:50.647 18:52:05 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:07:50.647 18:52:05 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:07:50.647 18:52:05 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:50.647 18:52:05 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:07:50.647 18:52:05 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:d8:00.0 00:07:50.647 18:52:05 -- common/autotest_common.sh@1591 -- # [[ -z 0000:d8:00.0 ]] 00:07:50.647 18:52:05 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1562606 00:07:50.647 18:52:05 -- common/autotest_common.sh@1597 -- # waitforlisten 1562606 00:07:50.647 18:52:05 -- common/autotest_common.sh@830 -- # '[' -z 1562606 ']' 00:07:50.647 18:52:05 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.647 18:52:05 -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:50.647 18:52:05 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.647 18:52:05 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:50.647 18:52:05 -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:50.647 18:52:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.647 [2024-06-10 18:52:05.231797] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:07:50.647 [2024-06-10 18:52:05.231860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562606 ] 00:07:50.647 [2024-06-10 18:52:05.352483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.919 [2024-06-10 18:52:05.440018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.486 18:52:06 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:51.486 18:52:06 -- common/autotest_common.sh@863 -- # return 0 00:07:51.486 18:52:06 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:07:51.486 18:52:06 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:07:51.486 18:52:06 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:07:54.769 nvme0n1 00:07:54.769 18:52:09 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:54.769 [2024-06-10 18:52:09.343717] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:07:54.769 request: 00:07:54.769 { 00:07:54.769 "nvme_ctrlr_name": "nvme0", 00:07:54.769 "password": "test", 00:07:54.769 "method": "bdev_nvme_opal_revert", 00:07:54.769 "req_id": 
1 00:07:54.769 } 00:07:54.769 Got JSON-RPC error response 00:07:54.769 response: 00:07:54.769 { 00:07:54.769 "code": -32602, 00:07:54.769 "message": "Invalid parameters" 00:07:54.769 } 00:07:54.769 18:52:09 -- common/autotest_common.sh@1603 -- # true 00:07:54.769 18:52:09 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:07:54.769 18:52:09 -- common/autotest_common.sh@1607 -- # killprocess 1562606 00:07:54.769 18:52:09 -- common/autotest_common.sh@949 -- # '[' -z 1562606 ']' 00:07:54.769 18:52:09 -- common/autotest_common.sh@953 -- # kill -0 1562606 00:07:54.769 18:52:09 -- common/autotest_common.sh@954 -- # uname 00:07:54.769 18:52:09 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:54.769 18:52:09 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1562606 00:07:54.769 18:52:09 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:54.769 18:52:09 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:54.769 18:52:09 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1562606' 00:07:54.769 killing process with pid 1562606 00:07:54.769 18:52:09 -- common/autotest_common.sh@968 -- # kill 1562606 00:07:54.769 18:52:09 -- common/autotest_common.sh@973 -- # wait 1562606 00:07:57.297 18:52:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:57.297 18:52:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:57.297 18:52:11 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:07:57.297 18:52:11 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:07:57.297 18:52:11 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:07:57.864 Restarting all devices. 
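The `bdev_nvme_opal_revert` call above fails with a JSON-RPC error response (code -32602, "Invalid parameters", because this controller does not support Opal). A dependency-free way to pull the numeric code out of such a response — the response text is copied from the log; the sed pattern is an assumption, not part of the test scripts:

```shell
# Extract the "code" field from a JSON-RPC error response with sed alone,
# avoiding a jq dependency. Response body copied from the trace above.
response='{
  "code": -32602,
  "message": "Invalid parameters"
}'
code=$(printf '%s\n' "$response" | sed -n 's/.*"code": *\(-\{0,1\}[0-9][0-9]*\).*/\1/p')
echo "$code"
```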
00:08:04.449 lstat() error: No such file or directory 00:08:04.449 QAT Error: No GENERAL section found 00:08:04.449 Failed to configure qat_dev0 00:08:04.449 lstat() error: No such file or directory 00:08:04.449 QAT Error: No GENERAL section found 00:08:04.449 Failed to configure qat_dev1 00:08:04.449 lstat() error: No such file or directory 00:08:04.449 QAT Error: No GENERAL section found 00:08:04.449 Failed to configure qat_dev2 00:08:04.449 lstat() error: No such file or directory 00:08:04.449 QAT Error: No GENERAL section found 00:08:04.449 Failed to configure qat_dev3 00:08:04.449 lstat() error: No such file or directory 00:08:04.449 QAT Error: No GENERAL section found 00:08:04.449 Failed to configure qat_dev4 00:08:04.449 enable sriov 00:08:04.449 Checking status of all devices. 00:08:04.449 There is 5 QAT acceleration device(s) in the system: 00:08:04.449 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:08:04.449 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:08:04.449 qat_dev2 - type: c6xx, inst_id: 2, node_id: 1, bsf: 0000:b4:00.0, #accel: 5 #engines: 10 state: down 00:08:04.449 qat_dev3 - type: c6xx, inst_id: 3, node_id: 1, bsf: 0000:b6:00.0, #accel: 5 #engines: 10 state: down 00:08:04.449 qat_dev4 - type: c6xx, inst_id: 4, node_id: 1, bsf: 0000:b8:00.0, #accel: 5 #engines: 10 state: down 00:08:04.449 0000:3d:00.0 set to 16 VFs 00:08:05.387 0000:3f:00.0 set to 16 VFs 00:08:05.956 0000:b4:00.0 set to 16 VFs 00:08:06.893 0000:b6:00.0 set to 16 VFs 00:08:07.462 0000:b8:00.0 set to 16 VFs 00:08:09.996 Properly configured the qat device with driver uio_pci_generic. 
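The `0000:3d:00.0 set to 16 VFs` lines correspond to qat_setup.sh enabling SR-IOV on each c6xx endpoint, conventionally by writing to `sriov_numvfs` in sysfs. A print-only sketch (BDF list taken from the QAT status output above; executing this for real requires root and the devices present):

```shell
# Print the sriov_numvfs writes that would give each QAT endpoint from the
# log 16 virtual functions. Print-only: nothing here touches /sys.
qat_bdfs=(0000:3d:00.0 0000:3f:00.0 0000:b4:00.0 0000:b6:00.0 0000:b8:00.0)
num_vfs=16
for bdf in "${qat_bdfs[@]}"; do
    printf 'echo %d > /sys/bus/pci/devices/%s/sriov_numvfs\n' "$num_vfs" "$bdf"
done
```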
00:08:09.996 18:52:24 -- spdk/autotest.sh@162 -- # timing_enter lib
00:08:09.996 18:52:24 -- common/autotest_common.sh@723 -- # xtrace_disable
00:08:09.996 18:52:24 -- common/autotest_common.sh@10 -- # set +x
00:08:09.996 18:52:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:08:09.996 18:52:24 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:08:09.996 18:52:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:09.996 18:52:24 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:09.996 18:52:24 -- common/autotest_common.sh@10 -- # set +x
00:08:09.996 ************************************
00:08:09.996 START TEST env
00:08:09.996 ************************************
00:08:09.996 18:52:24 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:08:09.996 * Looking for test storage...
00:08:09.996 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env
00:08:09.996 18:52:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:08:10.255 18:52:24 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:10.255 18:52:24 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:10.255 18:52:24 env -- common/autotest_common.sh@10 -- # set +x
00:08:10.255 ************************************
00:08:10.255 START TEST env_memory
00:08:10.255 ************************************
00:08:10.255 18:52:24 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:08:10.255
00:08:10.255
00:08:10.255 CUnit - A unit testing framework for C - Version 2.1-3
00:08:10.255 http://cunit.sourceforge.net/
00:08:10.255
00:08:10.255
00:08:10.255 Suite: memory
00:08:10.255 Test: alloc and free memory map ...[2024-06-10 18:52:24.831112] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:10.255 passed
00:08:10.255 Test: mem map translation ...[2024-06-10 18:52:24.848966] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:10.255 [2024-06-10 18:52:24.848982] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:10.255 [2024-06-10 18:52:24.849016] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:10.255 [2024-06-10 18:52:24.849026] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:10.255 passed
00:08:10.256 Test: mem map registration ...[2024-06-10 18:52:24.883902] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:08:10.256 [2024-06-10 18:52:24.883919] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:08:10.256 passed
00:08:10.256 Test: mem map adjacent registrations ...passed
00:08:10.256
00:08:10.256 Run Summary: Type Total Ran Passed Failed Inactive
00:08:10.256 suites 1 1 n/a 0 0
00:08:10.256 tests 4 4 4 0 0
00:08:10.256 asserts 152 152 152 0 n/a
00:08:10.256
00:08:10.256 Elapsed time = 0.129 seconds
00:08:10.256
00:08:10.256 real 0m0.142s
00:08:10.256 user 0m0.129s
00:08:10.256 sys 0m0.013s
00:08:10.256 18:52:24 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable
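The "invalid spdk_mem_map_set_translation parameters" errors above are negative cases the unit test drives on purpose: in vaddr=2097152 len=1234 and vaddr=1234 len=2097152, one value in each pair is not a multiple of the 2 MiB hugepage size. A hedged sketch of that alignment check (the 2 MiB granularity is an assumption read off the values in the log, not the library's exact code path):

```shell
# Both vaddr and len must be 2 MiB-aligned (2097152 bytes) for the call to pass.
HUGEPAGE=2097152
aligned() { [ $(( $1 % HUGEPAGE )) -eq 0 ] && [ $(( $2 % HUGEPAGE )) -eq 0 ]; }
aligned 2097152 1234    || echo "rejected: len=1234 not 2 MiB-aligned"
aligned 1234 2097152    || echo "rejected: vaddr=1234 not 2 MiB-aligned"
aligned 2097152 2097152 && echo "accepted"
```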
00:08:10.256 18:52:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:10.256 ************************************ 00:08:10.256 END TEST env_memory 00:08:10.256 ************************************ 00:08:10.256 18:52:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:10.256 18:52:24 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:10.256 18:52:24 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.256 18:52:24 env -- common/autotest_common.sh@10 -- # set +x 00:08:10.256 ************************************ 00:08:10.256 START TEST env_vtophys 00:08:10.256 ************************************ 00:08:10.256 18:52:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:10.516 EAL: lib.eal log level changed from notice to debug 00:08:10.517 EAL: Detected lcore 0 as core 0 on socket 0 00:08:10.517 EAL: Detected lcore 1 as core 1 on socket 0 00:08:10.517 EAL: Detected lcore 2 as core 2 on socket 0 00:08:10.517 EAL: Detected lcore 3 as core 3 on socket 0 00:08:10.517 EAL: Detected lcore 4 as core 4 on socket 0 00:08:10.517 EAL: Detected lcore 5 as core 5 on socket 0 00:08:10.517 EAL: Detected lcore 6 as core 6 on socket 0 00:08:10.517 EAL: Detected lcore 7 as core 8 on socket 0 00:08:10.517 EAL: Detected lcore 8 as core 9 on socket 0 00:08:10.517 EAL: Detected lcore 9 as core 10 on socket 0 00:08:10.517 EAL: Detected lcore 10 as core 11 on socket 0 00:08:10.517 EAL: Detected lcore 11 as core 12 on socket 0 00:08:10.517 EAL: Detected lcore 12 as core 13 on socket 0 00:08:10.517 EAL: Detected lcore 13 as core 14 on socket 0 00:08:10.517 EAL: Detected lcore 14 as core 16 on socket 0 00:08:10.517 EAL: Detected lcore 15 as core 17 on socket 0 00:08:10.517 EAL: Detected lcore 16 as core 18 on socket 0 00:08:10.517 EAL: Detected lcore 17 as core 19 on socket 0 00:08:10.517 EAL: 
Detected lcore 18 as core 20 on socket 0 00:08:10.517 EAL: Detected lcore 19 as core 21 on socket 0 00:08:10.517 EAL: Detected lcore 20 as core 22 on socket 0 00:08:10.517 EAL: Detected lcore 21 as core 24 on socket 0 00:08:10.517 EAL: Detected lcore 22 as core 25 on socket 0 00:08:10.517 EAL: Detected lcore 23 as core 26 on socket 0 00:08:10.517 EAL: Detected lcore 24 as core 27 on socket 0 00:08:10.517 EAL: Detected lcore 25 as core 28 on socket 0 00:08:10.517 EAL: Detected lcore 26 as core 29 on socket 0 00:08:10.517 EAL: Detected lcore 27 as core 30 on socket 0 00:08:10.517 EAL: Detected lcore 28 as core 0 on socket 1 00:08:10.517 EAL: Detected lcore 29 as core 1 on socket 1 00:08:10.517 EAL: Detected lcore 30 as core 2 on socket 1 00:08:10.517 EAL: Detected lcore 31 as core 3 on socket 1 00:08:10.517 EAL: Detected lcore 32 as core 4 on socket 1 00:08:10.517 EAL: Detected lcore 33 as core 5 on socket 1 00:08:10.517 EAL: Detected lcore 34 as core 6 on socket 1 00:08:10.517 EAL: Detected lcore 35 as core 8 on socket 1 00:08:10.517 EAL: Detected lcore 36 as core 9 on socket 1 00:08:10.517 EAL: Detected lcore 37 as core 10 on socket 1 00:08:10.517 EAL: Detected lcore 38 as core 11 on socket 1 00:08:10.517 EAL: Detected lcore 39 as core 12 on socket 1 00:08:10.517 EAL: Detected lcore 40 as core 13 on socket 1 00:08:10.517 EAL: Detected lcore 41 as core 14 on socket 1 00:08:10.517 EAL: Detected lcore 42 as core 16 on socket 1 00:08:10.517 EAL: Detected lcore 43 as core 17 on socket 1 00:08:10.517 EAL: Detected lcore 44 as core 18 on socket 1 00:08:10.517 EAL: Detected lcore 45 as core 19 on socket 1 00:08:10.517 EAL: Detected lcore 46 as core 20 on socket 1 00:08:10.517 EAL: Detected lcore 47 as core 21 on socket 1 00:08:10.517 EAL: Detected lcore 48 as core 22 on socket 1 00:08:10.517 EAL: Detected lcore 49 as core 24 on socket 1 00:08:10.517 EAL: Detected lcore 50 as core 25 on socket 1 00:08:10.517 EAL: Detected lcore 51 as core 26 on socket 1 00:08:10.517 EAL: 
Detected lcore 52 as core 27 on socket 1 00:08:10.517 EAL: Detected lcore 53 as core 28 on socket 1 00:08:10.517 EAL: Detected lcore 54 as core 29 on socket 1 00:08:10.517 EAL: Detected lcore 55 as core 30 on socket 1 00:08:10.517 EAL: Detected lcore 56 as core 0 on socket 0 00:08:10.517 EAL: Detected lcore 57 as core 1 on socket 0 00:08:10.517 EAL: Detected lcore 58 as core 2 on socket 0 00:08:10.517 EAL: Detected lcore 59 as core 3 on socket 0 00:08:10.517 EAL: Detected lcore 60 as core 4 on socket 0 00:08:10.517 EAL: Detected lcore 61 as core 5 on socket 0 00:08:10.517 EAL: Detected lcore 62 as core 6 on socket 0 00:08:10.517 EAL: Detected lcore 63 as core 8 on socket 0 00:08:10.517 EAL: Detected lcore 64 as core 9 on socket 0 00:08:10.517 EAL: Detected lcore 65 as core 10 on socket 0 00:08:10.517 EAL: Detected lcore 66 as core 11 on socket 0 00:08:10.517 EAL: Detected lcore 67 as core 12 on socket 0 00:08:10.517 EAL: Detected lcore 68 as core 13 on socket 0 00:08:10.517 EAL: Detected lcore 69 as core 14 on socket 0 00:08:10.517 EAL: Detected lcore 70 as core 16 on socket 0 00:08:10.517 EAL: Detected lcore 71 as core 17 on socket 0 00:08:10.517 EAL: Detected lcore 72 as core 18 on socket 0 00:08:10.517 EAL: Detected lcore 73 as core 19 on socket 0 00:08:10.517 EAL: Detected lcore 74 as core 20 on socket 0 00:08:10.517 EAL: Detected lcore 75 as core 21 on socket 0 00:08:10.517 EAL: Detected lcore 76 as core 22 on socket 0 00:08:10.517 EAL: Detected lcore 77 as core 24 on socket 0 00:08:10.517 EAL: Detected lcore 78 as core 25 on socket 0 00:08:10.517 EAL: Detected lcore 79 as core 26 on socket 0 00:08:10.517 EAL: Detected lcore 80 as core 27 on socket 0 00:08:10.517 EAL: Detected lcore 81 as core 28 on socket 0 00:08:10.517 EAL: Detected lcore 82 as core 29 on socket 0 00:08:10.517 EAL: Detected lcore 83 as core 30 on socket 0 00:08:10.517 EAL: Detected lcore 84 as core 0 on socket 1 00:08:10.517 EAL: Detected lcore 85 as core 1 on socket 1 00:08:10.517 EAL: 
Detected lcore 86 as core 2 on socket 1 00:08:10.517 EAL: Detected lcore 87 as core 3 on socket 1 00:08:10.517 EAL: Detected lcore 88 as core 4 on socket 1 00:08:10.517 EAL: Detected lcore 89 as core 5 on socket 1 00:08:10.517 EAL: Detected lcore 90 as core 6 on socket 1 00:08:10.517 EAL: Detected lcore 91 as core 8 on socket 1 00:08:10.517 EAL: Detected lcore 92 as core 9 on socket 1 00:08:10.517 EAL: Detected lcore 93 as core 10 on socket 1 00:08:10.517 EAL: Detected lcore 94 as core 11 on socket 1 00:08:10.517 EAL: Detected lcore 95 as core 12 on socket 1 00:08:10.517 EAL: Detected lcore 96 as core 13 on socket 1 00:08:10.517 EAL: Detected lcore 97 as core 14 on socket 1 00:08:10.517 EAL: Detected lcore 98 as core 16 on socket 1 00:08:10.517 EAL: Detected lcore 99 as core 17 on socket 1 00:08:10.517 EAL: Detected lcore 100 as core 18 on socket 1 00:08:10.517 EAL: Detected lcore 101 as core 19 on socket 1 00:08:10.517 EAL: Detected lcore 102 as core 20 on socket 1 00:08:10.517 EAL: Detected lcore 103 as core 21 on socket 1 00:08:10.517 EAL: Detected lcore 104 as core 22 on socket 1 00:08:10.517 EAL: Detected lcore 105 as core 24 on socket 1 00:08:10.517 EAL: Detected lcore 106 as core 25 on socket 1 00:08:10.517 EAL: Detected lcore 107 as core 26 on socket 1 00:08:10.517 EAL: Detected lcore 108 as core 27 on socket 1 00:08:10.517 EAL: Detected lcore 109 as core 28 on socket 1 00:08:10.517 EAL: Detected lcore 110 as core 29 on socket 1 00:08:10.517 EAL: Detected lcore 111 as core 30 on socket 1 00:08:10.517 EAL: Maximum logical cores by configuration: 128 00:08:10.517 EAL: Detected CPU lcores: 112 00:08:10.517 EAL: Detected NUMA nodes: 2 00:08:10.517 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:10.517 EAL: Detected shared linkage of DPDK 00:08:10.517 EAL: No shared files mode enabled, IPC will be disabled 00:08:10.517 EAL: No shared files mode enabled, IPC is disabled 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 
00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI 
driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:01.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b4:02.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.2 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.3 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.4 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 
0000:b6:01.5 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.6 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:01.7 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:02.0 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:02.1 wants IOVA as 'PA' 00:08:10.517 EAL: PCI driver qat for device 0000:b6:02.2 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b6:02.3 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b6:02.4 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b6:02.5 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b6:02.6 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b6:02.7 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.0 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.1 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.2 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.3 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.4 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.5 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.6 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:01.7 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.0 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.1 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.2 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.3 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.4 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.5 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.6 wants IOVA as 'PA' 00:08:10.518 EAL: PCI driver qat for device 0000:b8:02.7 wants IOVA 
as 'PA' 00:08:10.518 EAL: Bus pci wants IOVA as 'PA' 00:08:10.518 EAL: Bus auxiliary wants IOVA as 'DC' 00:08:10.518 EAL: Bus vdev wants IOVA as 'DC' 00:08:10.518 EAL: Selected IOVA mode 'PA' 00:08:10.518 EAL: Probing VFIO support... 00:08:10.518 EAL: IOMMU type 1 (Type 1) is supported 00:08:10.518 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:10.518 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:10.518 EAL: VFIO support initialized 00:08:10.518 EAL: Ask a virtual area of 0x2e000 bytes 00:08:10.518 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:10.518 EAL: Setting up physically contiguous memory... 00:08:10.518 EAL: Setting maximum number of open files to 524288 00:08:10.518 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:10.518 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:10.518 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 
bytes 00:08:10.518 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:10.518 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:10.518 EAL: Ask a virtual area of 0x61000 bytes 00:08:10.518 EAL: Virtual area found at 
0x201c00e00000 (size = 0x61000) 00:08:10.518 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:10.518 EAL: Ask a virtual area of 0x400000000 bytes 00:08:10.518 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:10.518 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:10.518 EAL: Hugepages will be freed exactly as allocated. 00:08:10.518 EAL: No shared files mode enabled, IPC is disabled 00:08:10.518 EAL: No shared files mode enabled, IPC is disabled 00:08:10.518 EAL: TSC frequency is ~2500000 KHz 00:08:10.518 EAL: Main lcore 0 is ready (tid=7f617b399b00;cpuset=[0]) 00:08:10.518 EAL: Trying to obtain current memory policy. 00:08:10.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:10.518 EAL: Restoring previous memory policy: 0 00:08:10.518 EAL: request: mp_malloc_sync 00:08:10.518 EAL: No shared files mode enabled, IPC is disabled 00:08:10.518 EAL: Heap on socket 0 was expanded by 2MB 00:08:10.518 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001000000 00:08:10.518 EAL: PCI memory mapped at 0x202001001000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001002000 00:08:10.518 EAL: PCI memory mapped at 0x202001003000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001004000 00:08:10.518 EAL: PCI memory mapped at 0x202001005000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 
0x202001006000 00:08:10.518 EAL: PCI memory mapped at 0x202001007000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001008000 00:08:10.518 EAL: PCI memory mapped at 0x202001009000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x20200100a000 00:08:10.518 EAL: PCI memory mapped at 0x20200100b000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x20200100c000 00:08:10.518 EAL: PCI memory mapped at 0x20200100d000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x20200100e000 00:08:10.518 EAL: PCI memory mapped at 0x20200100f000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001010000 00:08:10.518 EAL: PCI memory mapped at 0x202001011000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001012000 00:08:10.518 EAL: PCI memory mapped at 0x202001013000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 
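The eight 0x400000000-byte VA reservations reported earlier follow directly from the memseg list geometry EAL printed (n_segs:8192, hugepage_sz:2097152): each list reserves n_segs * hugepage_sz of virtual address space. A quick check of that arithmetic:

```shell
# 8192 segments * 2 MiB per segment = 16 GiB = 0x400000000 per memseg list.
printf 'per-list VA: 0x%x\n' $(( 8192 * 2097152 ))   # prints per-list VA: 0x400000000
```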
00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001014000 00:08:10.518 EAL: PCI memory mapped at 0x202001015000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001016000 00:08:10.518 EAL: PCI memory mapped at 0x202001017000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001018000 00:08:10.518 EAL: PCI memory mapped at 0x202001019000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x20200101a000 00:08:10.518 EAL: PCI memory mapped at 0x20200101b000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x20200101c000 00:08:10.518 EAL: PCI memory mapped at 0x20200101d000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:08:10.518 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x20200101e000 00:08:10.518 EAL: PCI memory mapped at 0x20200101f000 00:08:10.518 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:08:10.518 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:08:10.518 EAL: probe driver: 8086:37c9 qat 00:08:10.518 EAL: PCI memory mapped at 0x202001020000 00:08:10.519 EAL: PCI memory mapped at 0x202001021000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:01.0 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001022000 00:08:10.519 EAL: PCI memory mapped at 0x202001023000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001024000 00:08:10.519 EAL: PCI memory mapped at 0x202001025000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001026000 00:08:10.519 EAL: PCI memory mapped at 0x202001027000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001028000 00:08:10.519 EAL: PCI memory mapped at 0x202001029000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x20200102a000 00:08:10.519 EAL: PCI memory mapped at 0x20200102b000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x20200102c000 00:08:10.519 EAL: PCI memory mapped at 0x20200102d000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x20200102e000 00:08:10.519 EAL: PCI memory 
mapped at 0x20200102f000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001030000 00:08:10.519 EAL: PCI memory mapped at 0x202001031000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001032000 00:08:10.519 EAL: PCI memory mapped at 0x202001033000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001034000 00:08:10.519 EAL: PCI memory mapped at 0x202001035000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001036000 00:08:10.519 EAL: PCI memory mapped at 0x202001037000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x202001038000 00:08:10.519 EAL: PCI memory mapped at 0x202001039000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 00:08:10.519 EAL: PCI memory mapped at 0x20200103a000 00:08:10.519 EAL: PCI memory mapped at 0x20200103b000 00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:08:10.519 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:08:10.519 EAL: probe driver: 8086:37c9 qat 
00:08:10.519 EAL: PCI memory mapped at 0x20200103c000
00:08:10.519 EAL: PCI memory mapped at 0x20200103d000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:08:10.519 EAL: PCI device 0000:3f:02.7 on NUMA socket 0
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200103e000
00:08:10.519 EAL: PCI memory mapped at 0x20200103f000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:08:10.519 EAL: PCI device 0000:b4:01.0 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001040000
00:08:10.519 EAL: PCI memory mapped at 0x202001041000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.0 (socket 1)
00:08:10.519 EAL: Trying to obtain current memory policy.
00:08:10.519 EAL: Setting policy MPOL_PREFERRED for socket 1
00:08:10.519 EAL: Restoring previous memory policy: 4
00:08:10.519 EAL: request: mp_malloc_sync
00:08:10.519 EAL: No shared files mode enabled, IPC is disabled
00:08:10.519 EAL: Heap on socket 1 was expanded by 2MB
00:08:10.519 EAL: PCI device 0000:b4:01.1 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001042000
00:08:10.519 EAL: PCI memory mapped at 0x202001043000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.1 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:01.2 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001044000
00:08:10.519 EAL: PCI memory mapped at 0x202001045000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.2 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:01.3 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001046000
00:08:10.519 EAL: PCI memory mapped at 0x202001047000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.3 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:01.4 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001048000
00:08:10.519 EAL: PCI memory mapped at 0x202001049000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.4 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:01.5 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200104a000
00:08:10.519 EAL: PCI memory mapped at 0x20200104b000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.5 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:01.6 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200104c000
00:08:10.519 EAL: PCI memory mapped at 0x20200104d000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.6 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:01.7 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200104e000
00:08:10.519 EAL: PCI memory mapped at 0x20200104f000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.7 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.0 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001050000
00:08:10.519 EAL: PCI memory mapped at 0x202001051000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.0 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.1 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001052000
00:08:10.519 EAL: PCI memory mapped at 0x202001053000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.1 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.2 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001054000
00:08:10.519 EAL: PCI memory mapped at 0x202001055000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.2 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.3 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001056000
00:08:10.519 EAL: PCI memory mapped at 0x202001057000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.3 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.4 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001058000
00:08:10.519 EAL: PCI memory mapped at 0x202001059000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.4 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.5 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200105a000
00:08:10.519 EAL: PCI memory mapped at 0x20200105b000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.5 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.6 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200105c000
00:08:10.519 EAL: PCI memory mapped at 0x20200105d000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.6 (socket 1)
00:08:10.519 EAL: PCI device 0000:b4:02.7 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x20200105e000
00:08:10.519 EAL: PCI memory mapped at 0x20200105f000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.7 (socket 1)
00:08:10.519 EAL: PCI device 0000:b6:01.0 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001060000
00:08:10.519 EAL: PCI memory mapped at 0x202001061000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.0 (socket 1)
00:08:10.519 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.519 EAL: PCI memory unmapped at 0x202001060000
00:08:10.519 EAL: PCI memory unmapped at 0x202001061000
00:08:10.519 EAL: Requested device 0000:b6:01.0 cannot be used
00:08:10.519 EAL: PCI device 0000:b6:01.1 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001062000
00:08:10.519 EAL: PCI memory mapped at 0x202001063000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.1 (socket 1)
00:08:10.519 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.519 EAL: PCI memory unmapped at 0x202001062000
00:08:10.519 EAL: PCI memory unmapped at 0x202001063000
00:08:10.519 EAL: Requested device 0000:b6:01.1 cannot be used
00:08:10.519 EAL: PCI device 0000:b6:01.2 on NUMA socket 1
00:08:10.519 EAL: probe driver: 8086:37c9 qat
00:08:10.519 EAL: PCI memory mapped at 0x202001064000
00:08:10.519 EAL: PCI memory mapped at 0x202001065000
00:08:10.519 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.2 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001064000
00:08:10.520 EAL: PCI memory unmapped at 0x202001065000
00:08:10.520 EAL: Requested device 0000:b6:01.2 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:01.3 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001066000
00:08:10.520 EAL: PCI memory mapped at 0x202001067000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.3 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001066000
00:08:10.520 EAL: PCI memory unmapped at 0x202001067000
00:08:10.520 EAL: Requested device 0000:b6:01.3 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:01.4 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001068000
00:08:10.520 EAL: PCI memory mapped at 0x202001069000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.4 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001068000
00:08:10.520 EAL: PCI memory unmapped at 0x202001069000
00:08:10.520 EAL: Requested device 0000:b6:01.4 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:01.5 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200106a000
00:08:10.520 EAL: PCI memory mapped at 0x20200106b000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.5 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200106a000
00:08:10.520 EAL: PCI memory unmapped at 0x20200106b000
00:08:10.520 EAL: Requested device 0000:b6:01.5 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:01.6 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200106c000
00:08:10.520 EAL: PCI memory mapped at 0x20200106d000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.6 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200106c000
00:08:10.520 EAL: PCI memory unmapped at 0x20200106d000
00:08:10.520 EAL: Requested device 0000:b6:01.6 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:01.7 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200106e000
00:08:10.520 EAL: PCI memory mapped at 0x20200106f000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.7 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200106e000
00:08:10.520 EAL: PCI memory unmapped at 0x20200106f000
00:08:10.520 EAL: Requested device 0000:b6:01.7 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.0 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001070000
00:08:10.520 EAL: PCI memory mapped at 0x202001071000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.0 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001070000
00:08:10.520 EAL: PCI memory unmapped at 0x202001071000
00:08:10.520 EAL: Requested device 0000:b6:02.0 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.1 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001072000
00:08:10.520 EAL: PCI memory mapped at 0x202001073000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.1 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001072000
00:08:10.520 EAL: PCI memory unmapped at 0x202001073000
00:08:10.520 EAL: Requested device 0000:b6:02.1 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.2 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001074000
00:08:10.520 EAL: PCI memory mapped at 0x202001075000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.2 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001074000
00:08:10.520 EAL: PCI memory unmapped at 0x202001075000
00:08:10.520 EAL: Requested device 0000:b6:02.2 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.3 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001076000
00:08:10.520 EAL: PCI memory mapped at 0x202001077000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.3 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001076000
00:08:10.520 EAL: PCI memory unmapped at 0x202001077000
00:08:10.520 EAL: Requested device 0000:b6:02.3 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.4 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001078000
00:08:10.520 EAL: PCI memory mapped at 0x202001079000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.4 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001078000
00:08:10.520 EAL: PCI memory unmapped at 0x202001079000
00:08:10.520 EAL: Requested device 0000:b6:02.4 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.5 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200107a000
00:08:10.520 EAL: PCI memory mapped at 0x20200107b000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.5 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200107a000
00:08:10.520 EAL: PCI memory unmapped at 0x20200107b000
00:08:10.520 EAL: Requested device 0000:b6:02.5 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.6 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200107c000
00:08:10.520 EAL: PCI memory mapped at 0x20200107d000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.6 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200107c000
00:08:10.520 EAL: PCI memory unmapped at 0x20200107d000
00:08:10.520 EAL: Requested device 0000:b6:02.6 cannot be used
00:08:10.520 EAL: PCI device 0000:b6:02.7 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200107e000
00:08:10.520 EAL: PCI memory mapped at 0x20200107f000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.7 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200107e000
00:08:10.520 EAL: PCI memory unmapped at 0x20200107f000
00:08:10.520 EAL: Requested device 0000:b6:02.7 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.0 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001080000
00:08:10.520 EAL: PCI memory mapped at 0x202001081000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.0 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001080000
00:08:10.520 EAL: PCI memory unmapped at 0x202001081000
00:08:10.520 EAL: Requested device 0000:b8:01.0 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.1 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001082000
00:08:10.520 EAL: PCI memory mapped at 0x202001083000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.1 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001082000
00:08:10.520 EAL: PCI memory unmapped at 0x202001083000
00:08:10.520 EAL: Requested device 0000:b8:01.1 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.2 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001084000
00:08:10.520 EAL: PCI memory mapped at 0x202001085000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.2 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001084000
00:08:10.520 EAL: PCI memory unmapped at 0x202001085000
00:08:10.520 EAL: Requested device 0000:b8:01.2 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.3 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001086000
00:08:10.520 EAL: PCI memory mapped at 0x202001087000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.3 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001086000
00:08:10.520 EAL: PCI memory unmapped at 0x202001087000
00:08:10.520 EAL: Requested device 0000:b8:01.3 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.4 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x202001088000
00:08:10.520 EAL: PCI memory mapped at 0x202001089000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.4 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x202001088000
00:08:10.520 EAL: PCI memory unmapped at 0x202001089000
00:08:10.520 EAL: Requested device 0000:b8:01.4 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.5 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200108a000
00:08:10.520 EAL: PCI memory mapped at 0x20200108b000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.5 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200108a000
00:08:10.520 EAL: PCI memory unmapped at 0x20200108b000
00:08:10.520 EAL: Requested device 0000:b8:01.5 cannot be used
00:08:10.520 EAL: PCI device 0000:b8:01.6 on NUMA socket 1
00:08:10.520 EAL: probe driver: 8086:37c9 qat
00:08:10.520 EAL: PCI memory mapped at 0x20200108c000
00:08:10.520 EAL: PCI memory mapped at 0x20200108d000
00:08:10.520 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.6 (socket 1)
00:08:10.520 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.520 EAL: PCI memory unmapped at 0x20200108c000
00:08:10.521 EAL: PCI memory unmapped at 0x20200108d000
00:08:10.521 EAL: Requested device 0000:b8:01.6 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:01.7 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x20200108e000
00:08:10.521 EAL: PCI memory mapped at 0x20200108f000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.7 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x20200108e000
00:08:10.521 EAL: PCI memory unmapped at 0x20200108f000
00:08:10.521 EAL: Requested device 0000:b8:01.7 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.0 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x202001090000
00:08:10.521 EAL: PCI memory mapped at 0x202001091000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.0 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x202001090000
00:08:10.521 EAL: PCI memory unmapped at 0x202001091000
00:08:10.521 EAL: Requested device 0000:b8:02.0 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.1 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x202001092000
00:08:10.521 EAL: PCI memory mapped at 0x202001093000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.1 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x202001092000
00:08:10.521 EAL: PCI memory unmapped at 0x202001093000
00:08:10.521 EAL: Requested device 0000:b8:02.1 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.2 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x202001094000
00:08:10.521 EAL: PCI memory mapped at 0x202001095000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.2 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x202001094000
00:08:10.521 EAL: PCI memory unmapped at 0x202001095000
00:08:10.521 EAL: Requested device 0000:b8:02.2 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.3 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x202001096000
00:08:10.521 EAL: PCI memory mapped at 0x202001097000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.3 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x202001096000
00:08:10.521 EAL: PCI memory unmapped at 0x202001097000
00:08:10.521 EAL: Requested device 0000:b8:02.3 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.4 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x202001098000
00:08:10.521 EAL: PCI memory mapped at 0x202001099000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.4 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x202001098000
00:08:10.521 EAL: PCI memory unmapped at 0x202001099000
00:08:10.521 EAL: Requested device 0000:b8:02.4 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.5 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x20200109a000
00:08:10.521 EAL: PCI memory mapped at 0x20200109b000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.5 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x20200109a000
00:08:10.521 EAL: PCI memory unmapped at 0x20200109b000
00:08:10.521 EAL: Requested device 0000:b8:02.5 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.6 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x20200109c000
00:08:10.521 EAL: PCI memory mapped at 0x20200109d000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.6 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x20200109c000
00:08:10.521 EAL: PCI memory unmapped at 0x20200109d000
00:08:10.521 EAL: Requested device 0000:b8:02.6 cannot be used
00:08:10.521 EAL: PCI device 0000:b8:02.7 on NUMA socket 1
00:08:10.521 EAL: probe driver: 8086:37c9 qat
00:08:10.521 EAL: PCI memory mapped at 0x20200109e000
00:08:10.521 EAL: PCI memory mapped at 0x20200109f000
00:08:10.521 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.7 (socket 1)
00:08:10.521 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:10.521 EAL: PCI memory unmapped at 0x20200109e000
00:08:10.521 EAL: PCI memory unmapped at 0x20200109f000
00:08:10.521 EAL: Requested device 0000:b8:02.7 cannot be used
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: No PCI address specified using 'addr=' in: bus=pci
00:08:10.521 EAL: Mem event callback 'spdk:(nil)' registered
00:08:10.521
00:08:10.521
00:08:10.521 CUnit - A unit testing framework for C - Version 2.1-3
00:08:10.521 http://cunit.sourceforge.net/
00:08:10.521
00:08:10.521
00:08:10.521 Suite: components_suite
00:08:10.521 Test: vtophys_malloc_test ...passed
00:08:10.521 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:10.521 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.521 EAL: Restoring previous memory policy: 4
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was expanded by 4MB
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was shrunk by 4MB
00:08:10.521 EAL: Trying to obtain current memory policy.
00:08:10.521 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.521 EAL: Restoring previous memory policy: 4
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was expanded by 6MB
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was shrunk by 6MB
00:08:10.521 EAL: Trying to obtain current memory policy.
00:08:10.521 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.521 EAL: Restoring previous memory policy: 4
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was expanded by 10MB
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was shrunk by 10MB
00:08:10.521 EAL: Trying to obtain current memory policy.
00:08:10.521 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.521 EAL: Restoring previous memory policy: 4
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was expanded by 18MB
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was shrunk by 18MB
00:08:10.521 EAL: Trying to obtain current memory policy.
00:08:10.521 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.521 EAL: Restoring previous memory policy: 4
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was expanded by 34MB
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was shrunk by 34MB
00:08:10.521 EAL: Trying to obtain current memory policy.
00:08:10.521 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.521 EAL: Restoring previous memory policy: 4
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.521 EAL: No shared files mode enabled, IPC is disabled
00:08:10.521 EAL: Heap on socket 0 was expanded by 66MB
00:08:10.521 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.521 EAL: request: mp_malloc_sync
00:08:10.522 EAL: No shared files mode enabled, IPC is disabled
00:08:10.522 EAL: Heap on socket 0 was shrunk by 66MB
00:08:10.522 EAL: Trying to obtain current memory policy.
00:08:10.522 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.522 EAL: Restoring previous memory policy: 4
00:08:10.522 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.522 EAL: request: mp_malloc_sync
00:08:10.522 EAL: No shared files mode enabled, IPC is disabled
00:08:10.522 EAL: Heap on socket 0 was expanded by 130MB
00:08:10.522 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.781 EAL: request: mp_malloc_sync
00:08:10.781 EAL: No shared files mode enabled, IPC is disabled
00:08:10.781 EAL: Heap on socket 0 was shrunk by 130MB
00:08:10.781 EAL: Trying to obtain current memory policy.
00:08:10.781 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.781 EAL: Restoring previous memory policy: 4
00:08:10.781 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.781 EAL: request: mp_malloc_sync
00:08:10.781 EAL: No shared files mode enabled, IPC is disabled
00:08:10.781 EAL: Heap on socket 0 was expanded by 258MB
00:08:10.781 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.781 EAL: request: mp_malloc_sync
00:08:10.781 EAL: No shared files mode enabled, IPC is disabled
00:08:10.781 EAL: Heap on socket 0 was shrunk by 258MB
00:08:10.781 EAL: Trying to obtain current memory policy.
00:08:10.781 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:10.781 EAL: Restoring previous memory policy: 4
00:08:10.781 EAL: Calling mem event callback 'spdk:(nil)'
00:08:10.781 EAL: request: mp_malloc_sync
00:08:10.781 EAL: No shared files mode enabled, IPC is disabled
00:08:10.781 EAL: Heap on socket 0 was expanded by 514MB
00:08:11.040 EAL: Calling mem event callback 'spdk:(nil)'
00:08:11.040 EAL: request: mp_malloc_sync
00:08:11.040 EAL: No shared files mode enabled, IPC is disabled
00:08:11.040 EAL: Heap on socket 0 was shrunk by 514MB
00:08:11.040 EAL: Trying to obtain current memory policy.
00:08:11.040 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:11.372 EAL: Restoring previous memory policy: 4
00:08:11.372 EAL: Calling mem event callback 'spdk:(nil)'
00:08:11.372 EAL: request: mp_malloc_sync
00:08:11.372 EAL: No shared files mode enabled, IPC is disabled
00:08:11.372 EAL: Heap on socket 0 was expanded by 1026MB
00:08:11.372 EAL: Calling mem event callback 'spdk:(nil)'
00:08:11.646 EAL: request: mp_malloc_sync
00:08:11.646 EAL: No shared files mode enabled, IPC is disabled
00:08:11.646 EAL: Heap on socket 0 was shrunk by 1026MB
00:08:11.646 passed
00:08:11.646
00:08:11.646 Run Summary: Type Total Ran Passed Failed Inactive
00:08:11.646 suites 1 1 n/a 0 0
00:08:11.646 tests 2 2 2 0 0
00:08:11.646 asserts 7570 7570 7570 0 n/a
00:08:11.646
00:08:11.646 Elapsed time = 1.018 seconds
00:08:11.646 EAL: No shared files mode enabled, IPC is disabled
00:08:11.646 EAL: No shared files mode enabled, IPC is disabled
00:08:11.646 EAL: No shared files mode enabled, IPC is disabled
00:08:11.646
00:08:11.646 real 0m1.223s
00:08:11.646 user 0m0.669s
00:08:11.646 sys 0m0.524s
00:08:11.646 18:52:26 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:11.646 18:52:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:11.646 ************************************
00:08:11.646 END TEST env_vtophys
00:08:11.646 ************************************
00:08:11.646 18:52:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:08:11.647 18:52:26 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:11.647 18:52:26 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:11.647 18:52:26 env -- common/autotest_common.sh@10 -- # set +x
00:08:11.647 ************************************
00:08:11.647 START TEST env_pci
00:08:11.647 ************************************
00:08:11.647 18:52:26 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:08:11.647
00:08:11.647
00:08:11.647 CUnit - A unit testing framework for C - Version 2.1-3
00:08:11.647 http://cunit.sourceforge.net/
00:08:11.647
00:08:11.647
00:08:11.647 Suite: pci
00:08:11.647 Test: pci_hook ...[2024-06-10 18:52:26.338762] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1566438 has claimed it
00:08:11.647 EAL: Cannot find device (10000:00:01.0)
00:08:11.647 EAL: Failed to attach device on primary process
00:08:11.647 passed
00:08:11.647
00:08:11.647 Run Summary: Type Total Ran Passed Failed Inactive
00:08:11.647 suites 1 1 n/a 0 0
00:08:11.647 tests 1 1 1 0 0
00:08:11.647 asserts 25 25 25 0 n/a
00:08:11.647
00:08:11.647 Elapsed time = 0.045 seconds
00:08:11.647
00:08:11.647 real 0m0.074s
00:08:11.647 user 0m0.024s
00:08:11.647 sys 0m0.049s
00:08:11.647 18:52:26 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:11.647 18:52:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:11.647 ************************************
00:08:11.647 END TEST env_pci
00:08:11.647 ************************************
00:08:11.906 18:52:26 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:11.906 18:52:26 env -- env/env.sh@15 -- # uname
00:08:11.906 18:52:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:11.906 18:52:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:11.906 18:52:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:11.906 18:52:26 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']'
00:08:11.906 18:52:26 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:11.906 18:52:26 env -- common/autotest_common.sh@10 -- # set +x
00:08:11.906 ************************************
00:08:11.906 START TEST env_dpdk_post_init
00:08:11.906 ************************************
00:08:11.906 18:52:26 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:11.906 EAL: Detected CPU lcores: 112
00:08:11.906 EAL: Detected NUMA nodes: 2
00:08:11.906 EAL: Detected shared linkage of DPDK
00:08:11.906 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:11.906 EAL: Selected IOVA mode 'PA'
00:08:11.906 EAL: VFIO support initialized
00:08:11.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0)
00:08:11.907 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym
00:08:11.907 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.0 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.0_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.0_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.0_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.0_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.1 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.1_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.1_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.1_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.1_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.2 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.2_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.2_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.2_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.2_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.3 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.3_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.3_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.3_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.3_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.4 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.4_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.4_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.4_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.4_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.5 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.5_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.5_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.5_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.5_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.6 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.6_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.6_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.6_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.6_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.7 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.7_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.7_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:01.7_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.7_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.0 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.0_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.0_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.0_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.0_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.1 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.1_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.1_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.1_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.1_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.2 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.2_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.2_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.2_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.2_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.3 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.3_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.3_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.3_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.3_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.4 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.4_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.4_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.4_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.4_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.5 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.5_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.5_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.5_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.5_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.6 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.6_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.6_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.6_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.6_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.7 (socket 1)
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.7_qat_asym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.7_qat_asym,socket id: 1, max queue pairs: 0
00:08:11.908 CRYPTODEV: Creating cryptodev 0000:b4:02.7_qat_sym
00:08:11.908 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.7_qat_sym,socket id: 1, max queue pairs: 0
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.0 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.0 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.1 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.1 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.2 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.2 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.3 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.3 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.4 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.4 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.5 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.5 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.6 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.6 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.7 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:01.7 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.0 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.0 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.1 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.1 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.2 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.2 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.3 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.3 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.4 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.4 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.5 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.5 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.6 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.6 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.7 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b6:02.7 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.0 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.908 EAL: Requested device 0000:b8:01.0 cannot be used
00:08:11.908 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.1 (socket 1)
00:08:11.908 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.1 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.2 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.2 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.3 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.3 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.4 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.4 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.5 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.5 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.6 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.6 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.7 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:01.7 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.0 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.0 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.1 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.1 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.2 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.2 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.3 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.3 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.4 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.4 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.5 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.5 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.6 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.6 cannot be used
00:08:11.909 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.7 (socket 1)
00:08:11.909 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:11.909 EAL: Requested device 0000:b8:02.7 cannot be used
00:08:11.909 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:12.167 EAL: Using IOMMU type 1 (Type 1)
00:08:12.167 EAL: Ignore mapping IO port bar(1)
00:08:12.167 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:08:12.167 EAL: Ignore mapping IO port bar(1)
00:08:12.167 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:08:12.167 EAL: Ignore mapping IO port bar(1)
00:08:12.167 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:08:12.167 EAL: Ignore mapping IO port bar(1)
00:08:12.167 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:08:12.167 EAL: Ignore mapping IO port bar(1)
00:08:12.167 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:08:12.168 EAL: Ignore mapping IO port bar(1)
00:08:12.168 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.0 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.0 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.1 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.1 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.2 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.2 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.3 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.3 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.4 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.4 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.5 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.5 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.6 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.6 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.7 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:01.7 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.0 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.0 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.1 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.1 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.2 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.2 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.3 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.3 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.4 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.4 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.5 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.5 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.6 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.6 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.7 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b6:02.7 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.0 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.0 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.1 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.1 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.2 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.2 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.3 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.3 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.4 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.4 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.5 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.5 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.6 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.6 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.7 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:01.7 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.0 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:02.0 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.1 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:02.1 cannot be used
00:08:12.168 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.2 (socket 1)
00:08:12.168 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.168 EAL: Requested device 0000:b8:02.2 cannot be used
00:08:12.169 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.3 (socket 1)
00:08:12.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.169 EAL: Requested device 0000:b8:02.3 cannot be used
00:08:12.169 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.4 (socket 1)
00:08:12.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.169 EAL: Requested device 0000:b8:02.4 cannot be used
00:08:12.169 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.5 (socket 1)
00:08:12.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.169 EAL: Requested device 0000:b8:02.5 cannot be used
00:08:12.169 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.6 (socket 1)
00:08:12.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.169 EAL: Requested device 0000:b8:02.6 cannot be used
00:08:12.169 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.7 (socket 1)
00:08:12.169 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:12.169 EAL: Requested device 0000:b8:02.7 cannot be used
00:08:13.104 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:08:17.292 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:08:17.292 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000
00:08:17.292 Starting DPDK initialization...
00:08:17.292 Starting SPDK post initialization...
00:08:17.292 SPDK NVMe probe
00:08:17.292 Attaching to 0000:d8:00.0
00:08:17.292 Attached to 0000:d8:00.0
00:08:17.292 Cleaning up...
00:08:17.292
00:08:17.292 real 0m5.074s
00:08:17.292 user 0m3.692s
00:08:17.292 sys 0m0.440s
00:08:17.292 18:52:31 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:17.292 18:52:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:17.292 ************************************
00:08:17.292 END TEST env_dpdk_post_init
00:08:17.292 ************************************
00:08:17.292 18:52:31 env -- env/env.sh@26 -- # uname
00:08:17.292 18:52:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:17.292 18:52:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:17.292 18:52:31 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:17.292 18:52:31 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:17.292 18:52:31 env -- common/autotest_common.sh@10 -- # set +x
00:08:17.292 ************************************
00:08:17.292 START TEST
env_mem_callbacks 00:08:17.292 ************************************ 00:08:17.292 18:52:31 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:17.292 EAL: Detected CPU lcores: 112 00:08:17.292 EAL: Detected NUMA nodes: 2 00:08:17.292 EAL: Detected shared linkage of DPDK 00:08:17.292 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:17.292 EAL: Selected IOVA mode 'PA' 00:08:17.292 EAL: VFIO support initialized 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:08:17.292 
CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 
0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym 00:08:17.292 CRYPTODEV: Initialisation 
parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.292 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:08:17.292 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:08:17.292 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 
0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, 
max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:02.2 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating 
cryptodev 0000:3f:02.6_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.0 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.0_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.0_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.1 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.1_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.1_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.2 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.2_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.2_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.3 (socket 1) 00:08:17.293 
CRYPTODEV: Creating cryptodev 0000:b4:01.3_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.3_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.4 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.4_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.4_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.5 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.5_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.5_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.6 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.6_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.6_qat_sym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:01.7 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.7_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:01.7_qat_sym 00:08:17.293 
CRYPTODEV: Initialisation parameters - name: 0000:b4:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.293 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.0 (socket 1) 00:08:17.293 CRYPTODEV: Creating cryptodev 0000:b4:02.0_qat_asym 00:08:17.293 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.0_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.1 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.1_qat_asym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.1_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.2 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.2_qat_asym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.2_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.3 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.3_qat_asym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.3_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.4 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.4_qat_asym 
00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.4_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.5 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.5_qat_asym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.5_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.6 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.6_qat_asym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.6_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b4:02.7 (socket 1) 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.7_qat_asym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:08:17.294 CRYPTODEV: Creating cryptodev 0000:b4:02.7_qat_sym 00:08:17.294 CRYPTODEV: Initialisation parameters - name: 0000:b4:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.0 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.0 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.1 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:17.294 EAL: Requested device 0000:b6:01.1 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.2 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.2 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.3 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.3 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.4 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.4 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.5 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.5 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.6 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.6 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:01.7 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:01.7 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.0 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.0 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.1 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.1 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.2 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.2 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.3 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.3 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.4 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.4 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.5 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.5 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.6 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.6 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b6:02.7 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b6:02.7 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.0 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.0 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.1 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.1 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.2 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.2 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.3 (socket 1) 00:08:17.294 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.3 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.4 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.4 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.5 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.5 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.6 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.6 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:01.7 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:01.7 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.0 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.0 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.1 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.1 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.2 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.2 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.3 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.3 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:b8:02.4 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.4 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.5 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.5 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.6 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.6 cannot be used 00:08:17.294 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b8:02.7 (socket 1) 00:08:17.294 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.294 EAL: Requested device 0000:b8:02.7 cannot be used 00:08:17.294 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:17.294 00:08:17.294 00:08:17.294 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.294 http://cunit.sourceforge.net/ 00:08:17.294 00:08:17.294 00:08:17.294 Suite: memory 00:08:17.294 Test: test ... 
00:08:17.294 register 0x200000200000 2097152
00:08:17.294 register 0x201000a00000 2097152
00:08:17.294 malloc 3145728
00:08:17.294 register 0x200000400000 4194304
00:08:17.294 buf 0x200000500000 len 3145728 PASSED
00:08:17.294 malloc 64
00:08:17.294 buf 0x2000004fff40 len 64 PASSED
00:08:17.294 malloc 4194304
00:08:17.294 register 0x200000800000 6291456
00:08:17.294 buf 0x200000a00000 len 4194304 PASSED
00:08:17.294 free 0x200000500000 3145728
00:08:17.294 free 0x2000004fff40 64
00:08:17.295 unregister 0x200000400000 4194304 PASSED
00:08:17.295 free 0x200000a00000 4194304
00:08:17.295 unregister 0x200000800000 6291456 PASSED
00:08:17.295 malloc 8388608
00:08:17.295 register 0x200000400000 10485760
00:08:17.295 buf 0x200000600000 len 8388608 PASSED
00:08:17.295 free 0x200000600000 8388608
00:08:17.295 unregister 0x200000400000 10485760 PASSED
00:08:17.295 passed
00:08:17.295 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:17.295               suites      1      1    n/a      0        0
00:08:17.295                tests      1      1      1      0        0
00:08:17.295              asserts     16     16     16      0      n/a
00:08:17.295 Elapsed time = 0.005 seconds
00:08:17.295 real 0m0.100s
00:08:17.295 user 0m0.033s
00:08:17.295 sys 0m0.066s
00:08:17.295 18:52:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:17.295 18:52:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:17.295 ************************************
00:08:17.295 END TEST env_mem_callbacks
00:08:17.295 ************************************
00:08:17.295 real 0m7.137s
00:08:17.295 user 0m4.738s
00:08:17.295 sys 0m1.468s
00:08:17.295 18:52:31 env -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:17.295 18:52:31 env -- common/autotest_common.sh@10 -- # set +x
00:08:17.295 ************************************
00:08:17.295 END TEST env
00:08:17.295 ************************************
00:08:17.295 18:52:31 -- spdk/autotest.sh@169 -- # run_test rpc
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:08:17.295 18:52:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:17.295 18:52:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:17.295 18:52:31 -- common/autotest_common.sh@10 -- # set +x 00:08:17.295 ************************************ 00:08:17.295 START TEST rpc 00:08:17.295 ************************************ 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:08:17.295 * Looking for test storage... 00:08:17.295 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:17.295 18:52:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:17.295 18:52:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1567625 00:08:17.295 18:52:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.295 18:52:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1567625 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@830 -- # '[' -z 1567625 ']' 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:17.295 18:52:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.295 [2024-06-10 18:52:32.024715] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:08:17.295 [2024-06-10 18:52:32.024774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567625 ]
00:08:17.553 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.553 EAL: Requested device 0000:b6:01.0 cannot be used
00:08:17.554 [... the same "Reached maximum number of QAT devices" / "cannot be used" pair repeats for devices 0000:b6:01.1 through 0000:b8:02.7 ...]
00:08:17.554 [2024-06-10 18:52:32.159153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.554 [2024-06-10 18:52:32.245800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:17.554 [2024-06-10 18:52:32.245847] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1567625' to capture a snapshot of events at runtime.
00:08:17.554 [2024-06-10 18:52:32.245860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:17.554 [2024-06-10 18:52:32.245872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:17.554 [2024-06-10 18:52:32.245881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1567625 for offline analysis/debug.
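The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message earlier in this trace comes from a wait-for-listen step: the test blocks until the freshly spawned spdk_tgt creates its RPC socket. A minimal, self-contained sketch of that polling pattern (helper name, socket path, and retry budget are illustrative, not SPDK's actual implementation):

```shell
# Illustrative sketch of a wait-for-listen loop: poll for a UNIX domain
# socket until it exists or the retry budget is exhausted.
waitforlisten_sketch() {
  sock=$1
  retries=${2:-100}
  i=0
  while [ "$i" -lt "$retries" ]; do
    # -S is true once the server has created its listening socket
    [ -S "$sock" ] && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

# No server is started here, so this times out quickly on purpose
waitforlisten_sketch /var/tmp/does-not-exist.sock 3 || echo "timed out waiting"
```

The real helper in autotest_common.sh typically also checks that the spawned PID is still alive between polls, so a crashed target fails fast instead of burning the whole retry budget; this sketch keeps only the socket check.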
00:08:17.554 [2024-06-10 18:52:32.245917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.492 18:52:32 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:18.492 18:52:32 rpc -- common/autotest_common.sh@863 -- # return 0 00:08:18.492 18:52:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:18.492 18:52:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:18.492 18:52:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:18.492 18:52:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:18.492 18:52:32 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:18.492 18:52:32 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.492 18:52:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 ************************************ 00:08:18.492 START TEST rpc_integrity 00:08:18.492 ************************************ 00:08:18.492 18:52:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:08:18.492 18:52:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.492 18:52:32 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.492 18:52:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 18:52:32 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.492 18:52:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:08:18.492 18:52:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.492 { 00:08:18.492 "name": "Malloc0", 00:08:18.492 "aliases": [ 00:08:18.492 "488ade09-8227-473b-9009-039fae736b8b" 00:08:18.492 ], 00:08:18.492 "product_name": "Malloc disk", 00:08:18.492 "block_size": 512, 00:08:18.492 "num_blocks": 16384, 00:08:18.492 "uuid": "488ade09-8227-473b-9009-039fae736b8b", 00:08:18.492 "assigned_rate_limits": { 00:08:18.492 "rw_ios_per_sec": 0, 00:08:18.492 "rw_mbytes_per_sec": 0, 00:08:18.492 "r_mbytes_per_sec": 0, 00:08:18.492 "w_mbytes_per_sec": 0 00:08:18.492 }, 00:08:18.492 "claimed": false, 00:08:18.492 "zoned": false, 00:08:18.492 "supported_io_types": { 00:08:18.492 "read": true, 00:08:18.492 "write": true, 00:08:18.492 "unmap": true, 00:08:18.492 "write_zeroes": true, 00:08:18.492 "flush": true, 00:08:18.492 "reset": true, 00:08:18.492 "compare": false, 00:08:18.492 "compare_and_write": false, 00:08:18.492 "abort": true, 00:08:18.492 "nvme_admin": false, 00:08:18.492 "nvme_io": false 00:08:18.492 }, 00:08:18.492 
"memory_domains": [ 00:08:18.492 { 00:08:18.492 "dma_device_id": "system", 00:08:18.492 "dma_device_type": 1 00:08:18.492 }, 00:08:18.492 { 00:08:18.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.492 "dma_device_type": 2 00:08:18.492 } 00:08:18.492 ], 00:08:18.492 "driver_specific": {} 00:08:18.492 } 00:08:18.492 ]' 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 [2024-06-10 18:52:33.098803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:18.492 [2024-06-10 18:52:33.098840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.492 [2024-06-10 18:52:33.098857] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd455c0 00:08:18.492 [2024-06-10 18:52:33.098869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.492 [2024-06-10 18:52:33.100310] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.492 [2024-06-10 18:52:33.100336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:18.492 Passthru0 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.492 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.492 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.492 18:52:33 rpc.rpc_integrity 
-- rpc/rpc.sh@20 -- # bdevs='[ 00:08:18.492 { 00:08:18.492 "name": "Malloc0", 00:08:18.492 "aliases": [ 00:08:18.492 "488ade09-8227-473b-9009-039fae736b8b" 00:08:18.492 ], 00:08:18.492 "product_name": "Malloc disk", 00:08:18.492 "block_size": 512, 00:08:18.492 "num_blocks": 16384, 00:08:18.493 "uuid": "488ade09-8227-473b-9009-039fae736b8b", 00:08:18.493 "assigned_rate_limits": { 00:08:18.493 "rw_ios_per_sec": 0, 00:08:18.493 "rw_mbytes_per_sec": 0, 00:08:18.493 "r_mbytes_per_sec": 0, 00:08:18.493 "w_mbytes_per_sec": 0 00:08:18.493 }, 00:08:18.493 "claimed": true, 00:08:18.493 "claim_type": "exclusive_write", 00:08:18.493 "zoned": false, 00:08:18.493 "supported_io_types": { 00:08:18.493 "read": true, 00:08:18.493 "write": true, 00:08:18.493 "unmap": true, 00:08:18.493 "write_zeroes": true, 00:08:18.493 "flush": true, 00:08:18.493 "reset": true, 00:08:18.493 "compare": false, 00:08:18.493 "compare_and_write": false, 00:08:18.493 "abort": true, 00:08:18.493 "nvme_admin": false, 00:08:18.493 "nvme_io": false 00:08:18.493 }, 00:08:18.493 "memory_domains": [ 00:08:18.493 { 00:08:18.493 "dma_device_id": "system", 00:08:18.493 "dma_device_type": 1 00:08:18.493 }, 00:08:18.493 { 00:08:18.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.493 "dma_device_type": 2 00:08:18.493 } 00:08:18.493 ], 00:08:18.493 "driver_specific": {} 00:08:18.493 }, 00:08:18.493 { 00:08:18.493 "name": "Passthru0", 00:08:18.493 "aliases": [ 00:08:18.493 "3e05d56b-1a4d-586e-a070-b16cbd6a22be" 00:08:18.493 ], 00:08:18.493 "product_name": "passthru", 00:08:18.493 "block_size": 512, 00:08:18.493 "num_blocks": 16384, 00:08:18.493 "uuid": "3e05d56b-1a4d-586e-a070-b16cbd6a22be", 00:08:18.493 "assigned_rate_limits": { 00:08:18.493 "rw_ios_per_sec": 0, 00:08:18.493 "rw_mbytes_per_sec": 0, 00:08:18.493 "r_mbytes_per_sec": 0, 00:08:18.493 "w_mbytes_per_sec": 0 00:08:18.493 }, 00:08:18.493 "claimed": false, 00:08:18.493 "zoned": false, 00:08:18.493 "supported_io_types": { 00:08:18.493 "read": true, 
00:08:18.493 "write": true, 00:08:18.493 "unmap": true, 00:08:18.493 "write_zeroes": true, 00:08:18.493 "flush": true, 00:08:18.493 "reset": true, 00:08:18.493 "compare": false, 00:08:18.493 "compare_and_write": false, 00:08:18.493 "abort": true, 00:08:18.493 "nvme_admin": false, 00:08:18.493 "nvme_io": false 00:08:18.493 }, 00:08:18.493 "memory_domains": [ 00:08:18.493 { 00:08:18.493 "dma_device_id": "system", 00:08:18.493 "dma_device_type": 1 00:08:18.493 }, 00:08:18.493 { 00:08:18.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.493 "dma_device_type": 2 00:08:18.493 } 00:08:18.493 ], 00:08:18.493 "driver_specific": { 00:08:18.493 "passthru": { 00:08:18.493 "name": "Passthru0", 00:08:18.493 "base_bdev_name": "Malloc0" 00:08:18.493 } 00:08:18.493 } 00:08:18.493 } 00:08:18.493 ]' 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.493 18:52:33 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:18.493 18:52:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:18.493 00:08:18.493 real 0m0.287s 00:08:18.493 user 0m0.179s 00:08:18.493 sys 0m0.048s 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.493 18:52:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:18.493 ************************************ 00:08:18.493 END TEST rpc_integrity 00:08:18.493 ************************************ 00:08:18.752 18:52:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:18.752 18:52:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:18.752 18:52:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.752 18:52:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.752 ************************************ 00:08:18.752 START TEST rpc_plugins 00:08:18.752 ************************************ 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:08:18.752 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.752 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:18.752 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.752 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
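The `START TEST` / `END TEST` banners and `real`/`user`/`sys` timings that frame each sub-test in this log come from the `run_test` wrapper in autotest_common.sh. A simplified POSIX-shell sketch of that wrapper (banner format approximated; the real helper additionally times the command and toggles xtrace):

```shell
# Simplified run_test-style wrapper: print banners around a named command
# and propagate its exit status. (Sketch only; not SPDK's exact helper.)
run_test_sketch() {
  name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo_test true && echo "demo_test passed"
```

Because the wrapper returns the wrapped command's status, a failing sub-test propagates up to the `trap ... EXIT` handler seen at rpc.sh@66, which kills the target process before exiting.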
00:08:18.752 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:18.752 { 00:08:18.752 "name": "Malloc1", 00:08:18.752 "aliases": [ 00:08:18.752 "1e2f4079-606e-4fbf-a8e0-ea7c2ba852b8" 00:08:18.752 ], 00:08:18.753 "product_name": "Malloc disk", 00:08:18.753 "block_size": 4096, 00:08:18.753 "num_blocks": 256, 00:08:18.753 "uuid": "1e2f4079-606e-4fbf-a8e0-ea7c2ba852b8", 00:08:18.753 "assigned_rate_limits": { 00:08:18.753 "rw_ios_per_sec": 0, 00:08:18.753 "rw_mbytes_per_sec": 0, 00:08:18.753 "r_mbytes_per_sec": 0, 00:08:18.753 "w_mbytes_per_sec": 0 00:08:18.753 }, 00:08:18.753 "claimed": false, 00:08:18.753 "zoned": false, 00:08:18.753 "supported_io_types": { 00:08:18.753 "read": true, 00:08:18.753 "write": true, 00:08:18.753 "unmap": true, 00:08:18.753 "write_zeroes": true, 00:08:18.753 "flush": true, 00:08:18.753 "reset": true, 00:08:18.753 "compare": false, 00:08:18.753 "compare_and_write": false, 00:08:18.753 "abort": true, 00:08:18.753 "nvme_admin": false, 00:08:18.753 "nvme_io": false 00:08:18.753 }, 00:08:18.753 "memory_domains": [ 00:08:18.753 { 00:08:18.753 "dma_device_id": "system", 00:08:18.753 "dma_device_type": 1 00:08:18.753 }, 00:08:18.753 { 00:08:18.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.753 "dma_device_type": 2 00:08:18.753 } 00:08:18.753 ], 00:08:18.753 "driver_specific": {} 00:08:18.753 } 00:08:18.753 ]' 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:18.753 18:52:33 
rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:18.753 18:52:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:18.753 00:08:18.753 real 0m0.151s 00:08:18.753 user 0m0.091s 00:08:18.753 sys 0m0.026s 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.753 18:52:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:18.753 ************************************ 00:08:18.753 END TEST rpc_plugins 00:08:18.753 ************************************ 00:08:19.011 18:52:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:19.011 18:52:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:19.011 18:52:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.011 18:52:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.011 ************************************ 00:08:19.011 START TEST rpc_trace_cmd_test 00:08:19.011 ************************************ 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.011 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:19.011 "tpoint_shm_path": 
"/dev/shm/spdk_tgt_trace.pid1567625", 00:08:19.011 "tpoint_group_mask": "0x8", 00:08:19.011 "iscsi_conn": { 00:08:19.011 "mask": "0x2", 00:08:19.011 "tpoint_mask": "0x0" 00:08:19.011 }, 00:08:19.011 "scsi": { 00:08:19.011 "mask": "0x4", 00:08:19.011 "tpoint_mask": "0x0" 00:08:19.011 }, 00:08:19.011 "bdev": { 00:08:19.011 "mask": "0x8", 00:08:19.011 "tpoint_mask": "0xffffffffffffffff" 00:08:19.011 }, 00:08:19.011 "nvmf_rdma": { 00:08:19.011 "mask": "0x10", 00:08:19.011 "tpoint_mask": "0x0" 00:08:19.011 }, 00:08:19.011 "nvmf_tcp": { 00:08:19.011 "mask": "0x20", 00:08:19.011 "tpoint_mask": "0x0" 00:08:19.011 }, 00:08:19.011 "ftl": { 00:08:19.011 "mask": "0x40", 00:08:19.011 "tpoint_mask": "0x0" 00:08:19.011 }, 00:08:19.011 "blobfs": { 00:08:19.012 "mask": "0x80", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "dsa": { 00:08:19.012 "mask": "0x200", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "thread": { 00:08:19.012 "mask": "0x400", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "nvme_pcie": { 00:08:19.012 "mask": "0x800", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "iaa": { 00:08:19.012 "mask": "0x1000", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "nvme_tcp": { 00:08:19.012 "mask": "0x2000", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "bdev_nvme": { 00:08:19.012 "mask": "0x4000", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 }, 00:08:19.012 "sock": { 00:08:19.012 "mask": "0x8000", 00:08:19.012 "tpoint_mask": "0x0" 00:08:19.012 } 00:08:19.012 }' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:19.012 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:19.271 18:52:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:19.271 00:08:19.271 real 0m0.246s 00:08:19.271 user 0m0.210s 00:08:19.271 sys 0m0.029s 00:08:19.271 18:52:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.271 18:52:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.271 ************************************ 00:08:19.271 END TEST rpc_trace_cmd_test 00:08:19.271 ************************************ 00:08:19.271 18:52:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:19.271 18:52:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:19.271 18:52:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:19.271 18:52:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:19.271 18:52:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.271 18:52:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.271 ************************************ 00:08:19.271 START TEST rpc_daemon_integrity 00:08:19.271 ************************************ 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.271 18:52:33 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:19.271 { 00:08:19.271 "name": "Malloc2", 00:08:19.271 "aliases": [ 00:08:19.271 "4506895d-9b7a-4d33-83a2-c8e32445d34f" 00:08:19.271 ], 00:08:19.271 "product_name": "Malloc disk", 00:08:19.271 "block_size": 512, 00:08:19.271 "num_blocks": 16384, 00:08:19.271 "uuid": "4506895d-9b7a-4d33-83a2-c8e32445d34f", 00:08:19.271 "assigned_rate_limits": { 00:08:19.271 "rw_ios_per_sec": 0, 00:08:19.271 "rw_mbytes_per_sec": 0, 00:08:19.271 "r_mbytes_per_sec": 0, 00:08:19.271 "w_mbytes_per_sec": 0 00:08:19.271 }, 00:08:19.271 "claimed": false, 00:08:19.271 "zoned": false, 00:08:19.271 "supported_io_types": { 00:08:19.271 "read": true, 00:08:19.271 "write": true, 00:08:19.271 "unmap": true, 00:08:19.271 "write_zeroes": true, 00:08:19.271 "flush": true, 00:08:19.271 "reset": true, 00:08:19.271 "compare": false, 00:08:19.271 "compare_and_write": 
false, 00:08:19.271 "abort": true, 00:08:19.271 "nvme_admin": false, 00:08:19.271 "nvme_io": false 00:08:19.271 }, 00:08:19.271 "memory_domains": [ 00:08:19.271 { 00:08:19.271 "dma_device_id": "system", 00:08:19.271 "dma_device_type": 1 00:08:19.271 }, 00:08:19.271 { 00:08:19.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.271 "dma_device_type": 2 00:08:19.271 } 00:08:19.271 ], 00:08:19.271 "driver_specific": {} 00:08:19.271 } 00:08:19.271 ]' 00:08:19.271 18:52:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:19.271 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:19.271 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:19.271 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.271 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.531 [2024-06-10 18:52:34.033441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:19.531 [2024-06-10 18:52:34.033475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.531 [2024-06-10 18:52:34.033491] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb94180 00:08:19.531 [2024-06-10 18:52:34.033502] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.531 [2024-06-10 18:52:34.034752] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.531 [2024-06-10 18:52:34.034778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:19.531 Passthru0 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.531 18:52:34 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:19.531 { 00:08:19.531 "name": "Malloc2", 00:08:19.531 "aliases": [ 00:08:19.531 "4506895d-9b7a-4d33-83a2-c8e32445d34f" 00:08:19.531 ], 00:08:19.531 "product_name": "Malloc disk", 00:08:19.531 "block_size": 512, 00:08:19.531 "num_blocks": 16384, 00:08:19.531 "uuid": "4506895d-9b7a-4d33-83a2-c8e32445d34f", 00:08:19.531 "assigned_rate_limits": { 00:08:19.531 "rw_ios_per_sec": 0, 00:08:19.531 "rw_mbytes_per_sec": 0, 00:08:19.531 "r_mbytes_per_sec": 0, 00:08:19.531 "w_mbytes_per_sec": 0 00:08:19.531 }, 00:08:19.531 "claimed": true, 00:08:19.531 "claim_type": "exclusive_write", 00:08:19.531 "zoned": false, 00:08:19.531 "supported_io_types": { 00:08:19.531 "read": true, 00:08:19.531 "write": true, 00:08:19.531 "unmap": true, 00:08:19.531 "write_zeroes": true, 00:08:19.531 "flush": true, 00:08:19.531 "reset": true, 00:08:19.531 "compare": false, 00:08:19.531 "compare_and_write": false, 00:08:19.531 "abort": true, 00:08:19.531 "nvme_admin": false, 00:08:19.531 "nvme_io": false 00:08:19.531 }, 00:08:19.531 "memory_domains": [ 00:08:19.531 { 00:08:19.531 "dma_device_id": "system", 00:08:19.531 "dma_device_type": 1 00:08:19.531 }, 00:08:19.531 { 00:08:19.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.531 "dma_device_type": 2 00:08:19.531 } 00:08:19.531 ], 00:08:19.531 "driver_specific": {} 00:08:19.531 }, 00:08:19.531 { 00:08:19.531 "name": "Passthru0", 00:08:19.531 "aliases": [ 00:08:19.531 "c2fa84d2-2410-5b61-9313-96a183a51e5d" 00:08:19.531 ], 00:08:19.531 "product_name": "passthru", 00:08:19.531 "block_size": 512, 00:08:19.531 "num_blocks": 16384, 00:08:19.531 "uuid": "c2fa84d2-2410-5b61-9313-96a183a51e5d", 00:08:19.531 "assigned_rate_limits": { 00:08:19.531 "rw_ios_per_sec": 0, 00:08:19.531 "rw_mbytes_per_sec": 0, 
00:08:19.531 "r_mbytes_per_sec": 0, 00:08:19.531 "w_mbytes_per_sec": 0 00:08:19.531 }, 00:08:19.531 "claimed": false, 00:08:19.531 "zoned": false, 00:08:19.531 "supported_io_types": { 00:08:19.531 "read": true, 00:08:19.531 "write": true, 00:08:19.531 "unmap": true, 00:08:19.531 "write_zeroes": true, 00:08:19.531 "flush": true, 00:08:19.531 "reset": true, 00:08:19.531 "compare": false, 00:08:19.531 "compare_and_write": false, 00:08:19.531 "abort": true, 00:08:19.531 "nvme_admin": false, 00:08:19.531 "nvme_io": false 00:08:19.531 }, 00:08:19.531 "memory_domains": [ 00:08:19.531 { 00:08:19.531 "dma_device_id": "system", 00:08:19.531 "dma_device_type": 1 00:08:19.531 }, 00:08:19.531 { 00:08:19.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.531 "dma_device_type": 2 00:08:19.531 } 00:08:19.531 ], 00:08:19.531 "driver_specific": { 00:08:19.531 "passthru": { 00:08:19.531 "name": "Passthru0", 00:08:19.531 "base_bdev_name": "Malloc2" 00:08:19.531 } 00:08:19.531 } 00:08:19.531 } 00:08:19.531 ]' 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.531 18:52:34 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:19.531 00:08:19.531 real 0m0.304s 00:08:19.531 user 0m0.177s 00:08:19.531 sys 0m0.060s 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.531 18:52:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:19.531 ************************************ 00:08:19.531 END TEST rpc_daemon_integrity 00:08:19.531 ************************************ 00:08:19.531 18:52:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:19.531 18:52:34 rpc -- rpc/rpc.sh@84 -- # killprocess 1567625 00:08:19.531 18:52:34 rpc -- common/autotest_common.sh@949 -- # '[' -z 1567625 ']' 00:08:19.531 18:52:34 rpc -- common/autotest_common.sh@953 -- # kill -0 1567625 00:08:19.531 18:52:34 rpc -- common/autotest_common.sh@954 -- # uname 00:08:19.531 18:52:34 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:19.531 18:52:34 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1567625 00:08:19.791 18:52:34 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:19.791 18:52:34 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:19.791 18:52:34 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1567625' 00:08:19.791 killing process with pid 1567625 00:08:19.791 18:52:34 rpc -- common/autotest_common.sh@968 -- # kill 1567625 
00:08:19.791 18:52:34 rpc -- common/autotest_common.sh@973 -- # wait 1567625 00:08:20.050 00:08:20.050 real 0m2.763s 00:08:20.050 user 0m3.508s 00:08:20.050 sys 0m0.910s 00:08:20.050 18:52:34 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:20.050 18:52:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.050 ************************************ 00:08:20.050 END TEST rpc 00:08:20.050 ************************************ 00:08:20.050 18:52:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:20.050 18:52:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:20.050 18:52:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:20.050 18:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:20.050 ************************************ 00:08:20.050 START TEST skip_rpc 00:08:20.050 ************************************ 00:08:20.050 18:52:34 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:20.050 * Looking for test storage... 
00:08:20.310 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:20.310 18:52:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:08:20.310 18:52:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:08:20.310 18:52:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:20.310 18:52:34 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:20.310 18:52:34 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:20.310 18:52:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.310 ************************************ 00:08:20.310 START TEST skip_rpc 00:08:20.310 ************************************ 00:08:20.310 18:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:08:20.310 18:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1568328 00:08:20.310 18:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:20.310 18:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:20.310 18:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:20.310 [2024-06-10 18:52:34.919783] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:08:20.310 [2024-06-10 18:52:34.919842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568328 ] 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.0 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.1 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.2 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.3 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.4 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.5 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.6 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:01.7 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.0 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.1 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.2 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.3 cannot be used 00:08:20.310 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.4 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.5 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.6 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b6:02.7 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.0 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.1 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.2 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.3 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.4 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.5 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.6 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:01.7 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.0 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.1 cannot be used 00:08:20.310 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.2 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.3 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.4 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.5 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.6 cannot be used 00:08:20.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.310 EAL: Requested device 0000:b8:02.7 cannot be used 00:08:20.310 [2024-06-10 18:52:35.054380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.570 [2024-06-10 18:52:35.138891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1568328 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1568328 ']' 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1568328 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1568328 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1568328' 00:08:25.842 killing process with pid 1568328 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1568328 00:08:25.842 18:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1568328 00:08:25.842 00:08:25.842 real 0m5.408s 00:08:25.842 user 0m5.094s 00:08:25.842 sys 0m0.347s 00:08:25.842 18:52:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:25.842 18:52:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.842 
************************************ 00:08:25.842 END TEST skip_rpc 00:08:25.842 ************************************ 00:08:25.842 18:52:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:25.842 18:52:40 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:25.842 18:52:40 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:25.842 18:52:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.842 ************************************ 00:08:25.842 START TEST skip_rpc_with_json 00:08:25.842 ************************************ 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1569165 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1569165 00:08:25.842 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1569165 ']' 00:08:25.843 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.843 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:25.843 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:25.843 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:25.843 18:52:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:25.843 [2024-06-10 18:52:40.408844] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:08:25.843 [2024-06-10 18:52:40.408890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569165 ] 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.0 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.1 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.2 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.3 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.4 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.5 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.6 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:01.7 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.0 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.1 cannot be 
used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.2 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.3 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.4 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.5 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.6 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b6:02.7 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.0 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.1 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.2 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.3 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.4 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.5 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.6 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:01.7 cannot be used 00:08:25.843 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.0 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.1 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.2 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.3 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.4 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.5 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.6 cannot be used 00:08:25.843 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.843 EAL: Requested device 0000:b8:02.7 cannot be used 00:08:25.843 [2024-06-10 18:52:40.527276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.102 [2024-06-10 18:52:40.607782] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.670 [2024-06-10 18:52:41.309549] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:26.670 request: 00:08:26.670 { 
00:08:26.670 "trtype": "tcp", 00:08:26.670 "method": "nvmf_get_transports", 00:08:26.670 "req_id": 1 00:08:26.670 } 00:08:26.670 Got JSON-RPC error response 00:08:26.670 response: 00:08:26.670 { 00:08:26.670 "code": -19, 00:08:26.670 "message": "No such device" 00:08:26.670 } 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.670 [2024-06-10 18:52:41.321671] tcp.c: 724:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.670 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.929 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.929 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:08:26.929 { 00:08:26.929 "subsystems": [ 00:08:26.929 { 00:08:26.929 "subsystem": "keyring", 00:08:26.929 "config": [] 00:08:26.929 }, 00:08:26.929 { 00:08:26.929 "subsystem": "iobuf", 00:08:26.929 "config": [ 00:08:26.929 { 00:08:26.929 "method": "iobuf_set_options", 00:08:26.929 "params": { 00:08:26.929 "small_pool_count": 8192, 00:08:26.929 "large_pool_count": 1024, 00:08:26.929 "small_bufsize": 8192, 00:08:26.929 "large_bufsize": 135168 00:08:26.929 } 00:08:26.929 } 00:08:26.929 ] 00:08:26.929 }, 00:08:26.929 { 00:08:26.929 "subsystem": "sock", 
00:08:26.929 "config": [ 00:08:26.929 { 00:08:26.929 "method": "sock_set_default_impl", 00:08:26.929 "params": { 00:08:26.929 "impl_name": "posix" 00:08:26.929 } 00:08:26.929 }, 00:08:26.929 { 00:08:26.929 "method": "sock_impl_set_options", 00:08:26.929 "params": { 00:08:26.929 "impl_name": "ssl", 00:08:26.929 "recv_buf_size": 4096, 00:08:26.929 "send_buf_size": 4096, 00:08:26.929 "enable_recv_pipe": true, 00:08:26.929 "enable_quickack": false, 00:08:26.929 "enable_placement_id": 0, 00:08:26.929 "enable_zerocopy_send_server": true, 00:08:26.929 "enable_zerocopy_send_client": false, 00:08:26.929 "zerocopy_threshold": 0, 00:08:26.929 "tls_version": 0, 00:08:26.929 "enable_ktls": false 00:08:26.930 } 00:08:26.930 }, 00:08:26.930 { 00:08:26.930 "method": "sock_impl_set_options", 00:08:26.930 "params": { 00:08:26.930 "impl_name": "posix", 00:08:26.930 "recv_buf_size": 2097152, 00:08:26.930 "send_buf_size": 2097152, 00:08:26.930 "enable_recv_pipe": true, 00:08:26.930 "enable_quickack": false, 00:08:26.930 "enable_placement_id": 0, 00:08:26.930 "enable_zerocopy_send_server": true, 00:08:26.930 "enable_zerocopy_send_client": false, 00:08:26.930 "zerocopy_threshold": 0, 00:08:26.930 "tls_version": 0, 00:08:26.930 "enable_ktls": false 00:08:26.930 } 00:08:26.930 } 00:08:26.930 ] 00:08:26.930 }, 00:08:26.930 { 00:08:26.930 "subsystem": "vmd", 00:08:26.930 "config": [] 00:08:26.930 }, 00:08:26.930 { 00:08:26.930 "subsystem": "accel", 00:08:26.930 "config": [ 00:08:26.930 { 00:08:26.930 "method": "accel_set_options", 00:08:26.930 "params": { 00:08:26.930 "small_cache_size": 128, 00:08:26.930 "large_cache_size": 16, 00:08:26.930 "task_count": 2048, 00:08:26.930 "sequence_count": 2048, 00:08:26.930 "buf_count": 2048 00:08:26.930 } 00:08:26.930 } 00:08:26.930 ] 00:08:26.930 }, 00:08:26.930 { 00:08:26.930 "subsystem": "bdev", 00:08:26.930 "config": [ 00:08:26.930 { 00:08:26.930 "method": "bdev_set_options", 00:08:26.930 "params": { 00:08:26.930 "bdev_io_pool_size": 65535, 
00:08:26.930 "bdev_io_cache_size": 256,
00:08:26.930 "bdev_auto_examine": true,
00:08:26.930 "iobuf_small_cache_size": 128,
00:08:26.930 "iobuf_large_cache_size": 16
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "bdev_raid_set_options",
00:08:26.930 "params": {
00:08:26.930 "process_window_size_kb": 1024
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "bdev_iscsi_set_options",
00:08:26.930 "params": {
00:08:26.930 "timeout_sec": 30
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "bdev_nvme_set_options",
00:08:26.930 "params": {
00:08:26.930 "action_on_timeout": "none",
00:08:26.930 "timeout_us": 0,
00:08:26.930 "timeout_admin_us": 0,
00:08:26.930 "keep_alive_timeout_ms": 10000,
00:08:26.930 "arbitration_burst": 0,
00:08:26.930 "low_priority_weight": 0,
00:08:26.930 "medium_priority_weight": 0,
00:08:26.930 "high_priority_weight": 0,
00:08:26.930 "nvme_adminq_poll_period_us": 10000,
00:08:26.930 "nvme_ioq_poll_period_us": 0,
00:08:26.930 "io_queue_requests": 0,
00:08:26.930 "delay_cmd_submit": true,
00:08:26.930 "transport_retry_count": 4,
00:08:26.930 "bdev_retry_count": 3,
00:08:26.930 "transport_ack_timeout": 0,
00:08:26.930 "ctrlr_loss_timeout_sec": 0,
00:08:26.930 "reconnect_delay_sec": 0,
00:08:26.930 "fast_io_fail_timeout_sec": 0,
00:08:26.930 "disable_auto_failback": false,
00:08:26.930 "generate_uuids": false,
00:08:26.930 "transport_tos": 0,
00:08:26.930 "nvme_error_stat": false,
00:08:26.930 "rdma_srq_size": 0,
00:08:26.930 "io_path_stat": false,
00:08:26.930 "allow_accel_sequence": false,
00:08:26.930 "rdma_max_cq_size": 0,
00:08:26.930 "rdma_cm_event_timeout_ms": 0,
00:08:26.930 "dhchap_digests": [
00:08:26.930 "sha256",
00:08:26.930 "sha384",
00:08:26.930 "sha512"
00:08:26.930 ],
00:08:26.930 "dhchap_dhgroups": [
00:08:26.930 "null",
00:08:26.930 "ffdhe2048",
00:08:26.930 "ffdhe3072",
00:08:26.930 "ffdhe4096",
00:08:26.930 "ffdhe6144",
00:08:26.930 "ffdhe8192"
00:08:26.930 ]
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "bdev_nvme_set_hotplug",
00:08:26.930 "params": {
00:08:26.930 "period_us": 100000,
00:08:26.930 "enable": false
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "bdev_wait_for_examine"
00:08:26.930 }
00:08:26.930 ]
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "scsi",
00:08:26.930 "config": null
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "scheduler",
00:08:26.930 "config": [
00:08:26.930 {
00:08:26.930 "method": "framework_set_scheduler",
00:08:26.930 "params": {
00:08:26.930 "name": "static"
00:08:26.930 }
00:08:26.930 }
00:08:26.930 ]
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "vhost_scsi",
00:08:26.930 "config": []
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "vhost_blk",
00:08:26.930 "config": []
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "ublk",
00:08:26.930 "config": []
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "nbd",
00:08:26.930 "config": []
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "nvmf",
00:08:26.930 "config": [
00:08:26.930 {
00:08:26.930 "method": "nvmf_set_config",
00:08:26.930 "params": {
00:08:26.930 "discovery_filter": "match_any",
00:08:26.930 "admin_cmd_passthru": {
00:08:26.930 "identify_ctrlr": false
00:08:26.930 }
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "nvmf_set_max_subsystems",
00:08:26.930 "params": {
00:08:26.930 "max_subsystems": 1024
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "nvmf_set_crdt",
00:08:26.930 "params": {
00:08:26.930 "crdt1": 0,
00:08:26.930 "crdt2": 0,
00:08:26.930 "crdt3": 0
00:08:26.930 }
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "method": "nvmf_create_transport",
00:08:26.930 "params": {
00:08:26.930 "trtype": "TCP",
00:08:26.930 "max_queue_depth": 128,
00:08:26.930 "max_io_qpairs_per_ctrlr": 127,
00:08:26.930 "in_capsule_data_size": 4096,
00:08:26.930 "max_io_size": 131072,
00:08:26.930 "io_unit_size": 131072,
00:08:26.930 "max_aq_depth": 128,
00:08:26.930 "num_shared_buffers": 511,
00:08:26.930 "buf_cache_size": 4294967295,
00:08:26.930 "dif_insert_or_strip": false,
00:08:26.930 "zcopy": false,
00:08:26.930 "c2h_success": true,
00:08:26.930 "sock_priority": 0,
00:08:26.930 "abort_timeout_sec": 1,
00:08:26.930 "ack_timeout": 0,
00:08:26.930 "data_wr_pool_size": 0
00:08:26.930 }
00:08:26.930 }
00:08:26.930 ]
00:08:26.930 },
00:08:26.930 {
00:08:26.930 "subsystem": "iscsi",
00:08:26.930 "config": [
00:08:26.930 {
00:08:26.930 "method": "iscsi_set_options",
00:08:26.930 "params": {
00:08:26.930 "node_base": "iqn.2016-06.io.spdk",
00:08:26.930 "max_sessions": 128,
00:08:26.930 "max_connections_per_session": 2,
00:08:26.930 "max_queue_depth": 64,
00:08:26.930 "default_time2wait": 2,
00:08:26.930 "default_time2retain": 20,
00:08:26.930 "first_burst_length": 8192,
00:08:26.930 "immediate_data": true,
00:08:26.930 "allow_duplicated_isid": false,
00:08:26.930 "error_recovery_level": 0,
00:08:26.930 "nop_timeout": 60,
00:08:26.930 "nop_in_interval": 30,
00:08:26.930 "disable_chap": false,
00:08:26.930 "require_chap": false,
00:08:26.930 "mutual_chap": false,
00:08:26.930 "chap_group": 0,
00:08:26.930 "max_large_datain_per_connection": 64,
00:08:26.930 "max_r2t_per_connection": 4,
00:08:26.930 "pdu_pool_size": 36864,
00:08:26.930 "immediate_data_pool_size": 16384,
00:08:26.930 "data_out_pool_size": 2048
00:08:26.930 }
00:08:26.930 }
00:08:26.930 ]
00:08:26.930 }
00:08:26.930 ]
00:08:26.930 }
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1569165
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1569165 ']'
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1569165
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1569165
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1569165'
killing process with pid 1569165
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1569165
00:08:26.930 18:52:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1569165
00:08:27.189 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1569445
00:08:27.189 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json
00:08:27.189 18:52:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1569445
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1569445 ']'
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1569445
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1569445
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1569445'
killing process with pid 1569445
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1569445
00:08:32.462 18:52:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1569445
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt
00:08:32.722
00:08:32.722 real 0m6.937s
00:08:32.722 user 0m6.662s
00:08:32.722 sys 0m0.807s
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:32.722 ************************************
00:08:32.722 END TEST skip_rpc_with_json
00:08:32.722 ************************************
00:08:32.722 18:52:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:08:32.722 18:52:47 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:32.722 18:52:47 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:32.722 18:52:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:32.722 ************************************
00:08:32.722 START TEST skip_rpc_with_delay
00:08:32.722 ************************************
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:32.722 [2024-06-10 18:52:47.432813] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:08:32.722 [2024-06-10 18:52:47.432894] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:08:32.722
00:08:32.722 real 0m0.089s
00:08:32.722 user 0m0.051s
00:08:32.722 sys 0m0.037s
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:32.722 18:52:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:08:32.722 ************************************
00:08:32.722 END TEST skip_rpc_with_delay
00:08:32.722 ************************************
00:08:32.981 18:52:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:08:32.981 18:52:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:08:32.981 18:52:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:08:32.982 18:52:47 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:32.982 18:52:47 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:32.982 18:52:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:32.982 ************************************
00:08:32.982 START TEST exit_on_failed_rpc_init
00:08:32.982 ************************************
00:08:32.982 18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init
00:08:32.982 18:52:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1570552
00:08:32.982 18:52:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1570552
00:08:32.982 18:52:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1570552 ']'
18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100
18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable
18:52:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:32.982 [2024-06-10 18:52:47.599213] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:08:32.982 [2024-06-10 18:52:47.599270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570552 ]
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.0 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.1 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.2 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.3 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.4 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.5 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.6 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:01.7 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.0 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.1 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.2 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.3 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.4 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.5 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.6 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b6:02.7 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.0 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.1 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.2 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.3 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.4 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.5 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.6 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:01.7 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.0 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.1 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.2 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.3 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.4 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.5 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.6 cannot be used
00:08:32.982 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:32.982 EAL: Requested device 0000:b8:02.7 cannot be used
00:08:32.982 [2024-06-10 18:52:47.733043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:33.242 [2024-06-10 18:52:47.820455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:08:33.809 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:08:33.809 [2024-06-10 18:52:48.550033] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:08:33.809 [2024-06-10 18:52:48.550097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570677 ]
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.0 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.1 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.2 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.3 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.4 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.5 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.6 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:01.7 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.0 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.1 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.2 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.3 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.4 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.5 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.6 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b6:02.7 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.0 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.1 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.2 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.3 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.4 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.5 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.6 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:01.7 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.0 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.1 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.2 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.3 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.4 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.5 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.6 cannot be used
00:08:34.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:34.110 EAL: Requested device 0000:b8:02.7 cannot be used
00:08:34.110 [2024-06-10 18:52:48.675753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.110 [2024-06-10 18:52:48.762290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:08:34.110 [2024-06-10 18:52:48.762373] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:08:34.110 [2024-06-10 18:52:48.762393] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:08:34.110 [2024-06-10 18:52:48.762408] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1570552
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1570552 ']'
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1570552
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:08:34.110 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1570552
00:08:34.369 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:08:34.369 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:08:34.369 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1570552'
00:08:34.369 killing process with pid 1570552
00:08:34.369 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1570552
00:08:34.369 18:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1570552
00:08:34.628
00:08:34.628 real 0m1.701s
00:08:34.628 user 0m1.960s
00:08:34.628 sys 0m0.570s
00:08:34.628 18:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:34.628 18:52:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:34.628 ************************************
00:08:34.628 END TEST exit_on_failed_rpc_init
00:08:34.628 ************************************
00:08:34.628 18:52:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json
00:08:34.628
00:08:34.628 real 0m14.574s
00:08:34.628 user 0m13.923s
00:08:34.628 sys 0m2.077s
00:08:34.628 18:52:49 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:34.628 18:52:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:34.628 ************************************
00:08:34.628 END TEST skip_rpc
00:08:34.628 ************************************
00:08:34.628 18:52:49 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:08:34.628 18:52:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:34.628 18:52:49 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:34.628 18:52:49 -- common/autotest_common.sh@10 -- # set +x
00:08:34.628 ************************************
00:08:34.628 START TEST rpc_client
00:08:34.628 ************************************
00:08:34.628 18:52:49 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:08:34.887 * Looking for test storage...
00:08:34.887 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client
00:08:34.887 18:52:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:08:34.887 OK
00:08:34.887 18:52:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:08:34.887
00:08:34.887 real 0m0.138s
00:08:34.887 user 0m0.066s
00:08:34.887 sys 0m0.082s
00:08:34.887 18:52:49 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:34.887 18:52:49 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:08:34.887 ************************************
00:08:34.887 END TEST rpc_client
00:08:34.887 ************************************
00:08:34.887 18:52:49 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh
00:08:34.887 18:52:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:08:34.887 18:52:49 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:34.887 18:52:49 -- common/autotest_common.sh@10 -- # set +x
00:08:34.887 ************************************
00:08:34.887 START TEST json_config
00:08:34.887 ************************************
00:08:35.146 18:52:49 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@7 -- # uname -s
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:08:35.146 18:52:49 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:35.146 18:52:49 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:35.146 18:52:49 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:35.146 18:52:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:35.146 18:52:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:35.146 18:52:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:35.146 18:52:49 json_config -- paths/export.sh@5 -- # export PATH
00:08:35.146 18:52:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@47 -- # : 0
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:35.146 18:52:49 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json')
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:08:35.146 18:52:49 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:08:35.147 18:52:49 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:08:35.147 18:52:49 json_config -- json_config/json_config.sh@357 -- # json_config_test_init
00:08:35.147 18:52:49 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@723 -- # xtrace_disable
00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:35.147 18:52:49 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@723 -- # xtrace_disable
00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:35.147 18:52:49 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
00:08:35.147 18:52:49 json_config -- json_config/common.sh@9 -- # local app=target
00:08:35.147 18:52:49 json_config -- json_config/common.sh@10 -- # shift
00:08:35.147 18:52:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:08:35.147 18:52:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:08:35.147 18:52:49 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:08:35.147 18:52:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:08:35.147 18:52:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:08:35.147 18:52:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1570940
00:08:35.147 18:52:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:08:35.147 18:52:49 json_config -- json_config/common.sh@25 -- # waitforlisten 1570940 /var/tmp/spdk_tgt.sock 00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@830 -- # '[' -z 1570940 ']' 00:08:35.147 18:52:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:35.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:35.147 18:52:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.147 [2024-06-10 18:52:49.763758] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:08:35.147 [2024-06-10 18:52:49.763820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570940 ] 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.0 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.1 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.2 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.3 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.4 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.5 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.6 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:01.7 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.0 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.1 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.2 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.3 cannot be used 00:08:35.406 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.4 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.5 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.6 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b6:02.7 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.0 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.1 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.2 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.3 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.4 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.5 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.6 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:01.7 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.0 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.1 cannot be used 00:08:35.406 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.2 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.3 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.4 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.5 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.6 cannot be used 00:08:35.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.406 EAL: Requested device 0000:b8:02.7 cannot be used 00:08:35.406 [2024-06-10 18:52:50.144390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.665 [2024-06-10 18:52:50.227922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.924 18:52:50 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:35.924 18:52:50 json_config -- common/autotest_common.sh@863 -- # return 0 00:08:35.924 18:52:50 json_config -- json_config/common.sh@26 -- # echo '' 00:08:35.924 00:08:35.924 18:52:50 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:08:35.924 18:52:50 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:08:35.924 18:52:50 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:35.924 18:52:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.924 18:52:50 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:08:35.924 18:52:50 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:08:35.924 18:52:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:08:36.182 18:52:50 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:08:36.182 18:52:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:08:36.440 [2024-06-10 18:52:51.090510] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:08:36.440 18:52:51 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:08:36.440 18:52:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:08:36.698 [2024-06-10 18:52:51.319094] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:08:36.698 18:52:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:08:36.698 18:52:51 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:36.698 18:52:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:36.698 18:52:51 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:36.698 18:52:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:08:36.698 18:52:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:36.956 [2024-06-10 18:52:51.608134] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:08:42.232 18:52:56 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:42.232 18:52:56 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:42.232 18:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:42.232 18:52:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:08:42.232 18:52:56 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:42.232 18:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@55 -- # return 0 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:08:42.232 18:52:56 json_config -- common/autotest_common.sh@723 
-- # xtrace_disable 00:08:42.232 18:52:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:08:42.232 18:52:56 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:42.232 18:52:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:42.491 18:52:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:08:42.491 18:52:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:42.492 18:52:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:42.492 18:52:57 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:08:42.492 18:52:57 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:08:42.492 18:52:57 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:42.492 18:52:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:42.750 Nvme0n1p0 
Nvme0n1p1 00:08:42.750 18:52:57 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:42.750 18:52:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:43.009 [2024-06-10 18:52:57.645726] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:43.009 [2024-06-10 18:52:57.645775] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:43.009 00:08:43.009 18:52:57 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:43.009 18:52:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:43.267 Malloc3 00:08:43.267 18:52:57 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:43.267 18:52:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:43.527 [2024-06-10 18:52:58.094976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:43.527 [2024-06-10 18:52:58.095020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.527 [2024-06-10 18:52:58.095039] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xba8aa0 00:08:43.527 [2024-06-10 18:52:58.095051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.527 [2024-06-10 18:52:58.096447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.527 [2024-06-10 18:52:58.096474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:43.527 
PTBdevFromMalloc3 00:08:43.527 18:52:58 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:43.527 18:52:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:43.786 Null0 00:08:43.786 18:52:58 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:43.786 18:52:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:44.044 Malloc0 00:08:44.044 18:52:58 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:44.044 18:52:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:44.044 Malloc1 00:08:44.044 18:52:58 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:44.044 18:52:58 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:44.612 102400+0 records in 00:08:44.612 102400+0 records out 00:08:44.612 104857600 bytes (105 MB, 100 MiB) copied, 0.286067 s, 367 MB/s 00:08:44.612 18:52:59 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:44.612 18:52:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:44.612 aio_disk 
00:08:44.612 18:52:59 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:44.612 18:52:59 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:44.612 18:52:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:47.973 1b34b452-68e8-4abd-ba05-572667bf3b07 00:08:47.973 18:53:02 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:47.973 18:53:02 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:47.973 18:53:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:48.231 18:53:02 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:48.231 18:53:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:48.490 18:53:03 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:48.490 18:53:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:48.748 18:53:03 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:48.748 18:53:03 
json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:49.005 18:53:03 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:08:49.005 18:53:03 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:08:49.005 18:53:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:08:49.263 MallocForCryptoBdev 00:08:49.263 18:53:03 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:08:49.263 18:53:03 json_config -- json_config/json_config.sh@159 -- # wc -l 00:08:49.521 18:53:04 json_config -- json_config/json_config.sh@159 -- # [[ 5 -eq 0 ]] 00:08:49.521 18:53:04 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:08:49.521 18:53:04 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:08:49.521 18:53:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:08:49.779 [2024-06-10 18:53:04.303139] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:08:49.779 CryptoMallocBdev 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 
bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:48849a78-06e1-4245-b0fc-1cbeb0c25cf6 bdev_register:15ea8078-3d2b-468a-95d6-6a3ee8fc937a bdev_register:5a43816f-d423-4cbc-aeb8-1596fa692f60 bdev_register:67dd1d86-fe49-4fa7-8996-6b527edf4d8f bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:48849a78-06e1-4245-b0fc-1cbeb0c25cf6 bdev_register:15ea8078-3d2b-468a-95d6-6a3ee8fc937a bdev_register:5a43816f-d423-4cbc-aeb8-1596fa692f60 bdev_register:67dd1d86-fe49-4fa7-8996-6b527edf4d8f bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@71 -- # sort 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@72 -- # sort 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:08:49.779 18:53:04 json_config -- 
json_config/json_config.sh@61 -- # IFS=: 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:08:49.779 18:53:04 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:49.779 18:53:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 
json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:48849a78-06e1-4245-b0fc-1cbeb0c25cf6 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:15ea8078-3d2b-468a-95d6-6a3ee8fc937a 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:5a43816f-d423-4cbc-aeb8-1596fa692f60 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:67dd1d86-fe49-4fa7-8996-6b527edf4d8f 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:MallocForCryptoBdev 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:08:50.037 18:53:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:15ea8078-3d2b-468a-95d6-6a3ee8fc937a bdev_register:48849a78-06e1-4245-b0fc-1cbeb0c25cf6 
bdev_register:5a43816f-d423-4cbc-aeb8-1596fa692f60 bdev_register:67dd1d86-fe49-4fa7-8996-6b527edf4d8f bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\5\e\a\8\0\7\8\-\3\d\2\b\-\4\6\8\a\-\9\5\d\6\-\6\a\3\e\e\8\f\c\9\3\7\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\8\8\4\9\a\7\8\-\0\6\e\1\-\4\2\4\5\-\b\0\f\c\-\1\c\b\e\b\0\c\2\5\c\f\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\a\4\3\8\1\6\f\-\d\4\2\3\-\4\c\b\c\-\a\e\b\8\-\1\5\9\6\f\a\6\9\2\f\6\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\7\d\d\1\d\8\6\-\f\e\4\9\-\4\f\a\7\-\8\9\9\6\-\6\b\5\2\7\e\d\f\4\d\8\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@86 -- # cat 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:15ea8078-3d2b-468a-95d6-6a3ee8fc937a bdev_register:48849a78-06e1-4245-b0fc-1cbeb0c25cf6 bdev_register:5a43816f-d423-4cbc-aeb8-1596fa692f60 bdev_register:67dd1d86-fe49-4fa7-8996-6b527edf4d8f bdev_register:aio_disk bdev_register:CryptoMallocBdev 
bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:08:50.038 Expected events matched: 00:08:50.038 bdev_register:15ea8078-3d2b-468a-95d6-6a3ee8fc937a 00:08:50.038 bdev_register:48849a78-06e1-4245-b0fc-1cbeb0c25cf6 00:08:50.038 bdev_register:5a43816f-d423-4cbc-aeb8-1596fa692f60 00:08:50.038 bdev_register:67dd1d86-fe49-4fa7-8996-6b527edf4d8f 00:08:50.038 bdev_register:aio_disk 00:08:50.038 bdev_register:CryptoMallocBdev 00:08:50.038 bdev_register:Malloc0 00:08:50.038 bdev_register:Malloc0p0 00:08:50.038 bdev_register:Malloc0p1 00:08:50.038 bdev_register:Malloc0p2 00:08:50.038 bdev_register:Malloc1 00:08:50.038 bdev_register:Malloc3 00:08:50.038 bdev_register:MallocForCryptoBdev 00:08:50.038 bdev_register:Null0 00:08:50.038 bdev_register:Nvme0n1 00:08:50.038 bdev_register:Nvme0n1p0 00:08:50.038 bdev_register:Nvme0n1p1 00:08:50.038 bdev_register:PTBdevFromMalloc3 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:08:50.038 18:53:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:50.038 18:53:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:08:50.038 18:53:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:50.038 18:53:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.038 18:53:04 json_config -- 
json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:08:50.038 18:53:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:50.038 18:53:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:50.295 MallocBdevForConfigChangeCheck 00:08:50.295 18:53:04 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:08:50.295 18:53:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:50.295 18:53:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.295 18:53:04 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:08:50.295 18:53:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:50.553 18:53:05 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:08:50.553 INFO: shutting down applications... 
00:08:50.553 18:53:05 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:08:50.553 18:53:05 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:08:50.553 18:53:05 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:08:50.553 18:53:05 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:50.811 [2024-06-10 18:53:05.430608] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:52.712 Calling clear_iscsi_subsystem 00:08:52.712 Calling clear_nvmf_subsystem 00:08:52.712 Calling clear_nbd_subsystem 00:08:52.712 Calling clear_ublk_subsystem 00:08:52.712 Calling clear_vhost_blk_subsystem 00:08:52.712 Calling clear_vhost_scsi_subsystem 00:08:52.712 Calling clear_bdev_subsystem 00:08:52.712 18:53:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:08:52.712 18:53:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:08:52.712 18:53:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:08:52.712 18:53:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:52.712 18:53:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:52.712 18:53:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:53.279 18:53:07 json_config -- json_config/json_config.sh@345 -- # break 00:08:53.279 18:53:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:08:53.279 18:53:07 
json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:08:53.279 18:53:07 json_config -- json_config/common.sh@31 -- # local app=target 00:08:53.279 18:53:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:53.279 18:53:07 json_config -- json_config/common.sh@35 -- # [[ -n 1570940 ]] 00:08:53.279 18:53:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1570940 00:08:53.279 18:53:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:53.279 18:53:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:53.279 18:53:07 json_config -- json_config/common.sh@41 -- # kill -0 1570940 00:08:53.279 18:53:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:53.846 18:53:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:53.846 18:53:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:53.846 18:53:08 json_config -- json_config/common.sh@41 -- # kill -0 1570940 00:08:53.846 18:53:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:53.846 18:53:08 json_config -- json_config/common.sh@43 -- # break 00:08:53.846 18:53:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:53.846 18:53:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:53.846 SPDK target shutdown done 00:08:53.846 18:53:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:08:53.846 INFO: relaunching applications... 
00:08:53.846 18:53:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:08:53.846 18:53:08 json_config -- json_config/common.sh@9 -- # local app=target 00:08:53.846 18:53:08 json_config -- json_config/common.sh@10 -- # shift 00:08:53.846 18:53:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:53.846 18:53:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:53.846 18:53:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:53.846 18:53:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:53.846 18:53:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:53.846 18:53:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1574301 00:08:53.847 18:53:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:08:53.847 18:53:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:53.847 Waiting for target to run... 00:08:53.847 18:53:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1574301 /var/tmp/spdk_tgt.sock 00:08:53.847 18:53:08 json_config -- common/autotest_common.sh@830 -- # '[' -z 1574301 ']' 00:08:53.847 18:53:08 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:53.847 18:53:08 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:53.847 18:53:08 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:53.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:53.847 18:53:08 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:53.847 18:53:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:53.847 [2024-06-10 18:53:08.381662] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:08:53.847 [2024-06-10 18:53:08.381728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574301 ] 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.0 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.1 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.2 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.3 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.4 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.5 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.6 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:01.7 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.0 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.1 cannot be used 00:08:54.106 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.2 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.3 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.4 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.5 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.6 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b6:02.7 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.0 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.1 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.2 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.3 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.4 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.5 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.6 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:01.7 cannot be used 00:08:54.106 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.0 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.1 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.2 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.3 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.4 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.5 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.6 cannot be used 00:08:54.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.106 EAL: Requested device 0000:b8:02.7 cannot be used 00:08:54.364 [2024-06-10 18:53:08.899331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.364 [2024-06-10 18:53:09.000600] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.364 [2024-06-10 18:53:09.054558] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:08:54.364 [2024-06-10 18:53:09.062602] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:08:54.364 [2024-06-10 18:53:09.070619] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:08:54.622 [2024-06-10 18:53:09.151394] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:08:57.152 [2024-06-10 18:53:11.321229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on Malloc3 00:08:57.152 [2024-06-10 18:53:11.321290] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:57.152 [2024-06-10 18:53:11.321304] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:57.152 [2024-06-10 18:53:11.329245] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:57.152 [2024-06-10 18:53:11.329269] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:57.152 [2024-06-10 18:53:11.337259] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:57.152 [2024-06-10 18:53:11.337281] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:57.152 [2024-06-10 18:53:11.345293] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:08:57.152 [2024-06-10 18:53:11.345317] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:08:57.152 [2024-06-10 18:53:11.345328] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:59.683 [2024-06-10 18:53:14.248377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:59.683 [2024-06-10 18:53:14.248419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.683 [2024-06-10 18:53:14.248435] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b74190 00:08:59.683 [2024-06-10 18:53:14.248447] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.683 [2024-06-10 18:53:14.248724] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.683 [2024-06-10 18:53:14.248741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:59.941 18:53:14 json_config -- 
common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:59.941 18:53:14 json_config -- common/autotest_common.sh@863 -- # return 0 00:08:59.941 18:53:14 json_config -- json_config/common.sh@26 -- # echo '' 00:08:59.941 00:08:59.941 18:53:14 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:08:59.941 18:53:14 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:59.941 INFO: Checking if target configuration is the same... 00:08:59.941 18:53:14 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.941 18:53:14 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:08:59.941 18:53:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:59.941 + '[' 2 -ne 2 ']' 00:08:59.941 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:59.941 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:08:59.941 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:59.941 +++ basename /dev/fd/62 00:08:59.941 ++ mktemp /tmp/62.XXX 00:08:59.941 + tmp_file_1=/tmp/62.96M 00:08:59.941 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:08:59.941 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:59.941 + tmp_file_2=/tmp/spdk_tgt_config.json.8iP 00:08:59.941 + ret=0 00:08:59.941 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:00.199 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:00.199 + diff -u /tmp/62.96M /tmp/spdk_tgt_config.json.8iP 00:09:00.199 + echo 'INFO: JSON config files are the same' 00:09:00.199 INFO: JSON config files are the same 00:09:00.199 + rm /tmp/62.96M /tmp/spdk_tgt_config.json.8iP 00:09:00.199 + exit 0 00:09:00.199 18:53:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:09:00.199 18:53:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:00.199 INFO: changing configuration and checking if this can be detected... 
00:09:00.199 18:53:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:00.199 18:53:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:00.457 18:53:15 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:09:00.457 18:53:15 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:00.457 18:53:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:00.457 + '[' 2 -ne 2 ']' 00:09:00.457 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:00.457 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:09:00.457 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:09:00.457 +++ basename /dev/fd/62 00:09:00.457 ++ mktemp /tmp/62.XXX 00:09:00.457 + tmp_file_1=/tmp/62.nJd 00:09:00.457 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:00.457 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:00.457 + tmp_file_2=/tmp/spdk_tgt_config.json.itk 00:09:00.457 + ret=0 00:09:00.457 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:01.024 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:01.024 + diff -u /tmp/62.nJd /tmp/spdk_tgt_config.json.itk 00:09:01.024 + ret=1 00:09:01.024 + echo '=== Start of file: /tmp/62.nJd ===' 00:09:01.024 + cat /tmp/62.nJd 00:09:01.024 + echo '=== End of file: /tmp/62.nJd ===' 00:09:01.024 + echo '' 00:09:01.024 + echo '=== Start of file: /tmp/spdk_tgt_config.json.itk ===' 00:09:01.024 + cat /tmp/spdk_tgt_config.json.itk 00:09:01.024 + echo '=== End of file: /tmp/spdk_tgt_config.json.itk ===' 00:09:01.024 + echo '' 00:09:01.024 + rm /tmp/62.nJd /tmp/spdk_tgt_config.json.itk 00:09:01.024 + exit 1 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:09:01.024 INFO: configuration change detected. 
00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:09:01.024 18:53:15 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:01.024 18:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@317 -- # [[ -n 1574301 ]] 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:09:01.024 18:53:15 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:01.024 18:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:09:01.024 18:53:15 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:01.024 18:53:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:01.282 18:53:15 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:01.282 18:53:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:01.282 18:53:16 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:01.282 18:53:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:09:01.539 18:53:16 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:01.539 18:53:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:01.796 18:53:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:09:01.796 18:53:16 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:09:01.796 18:53:16 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:09:01.796 18:53:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:09:01.796 18:53:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.796 18:53:16 json_config -- json_config/json_config.sh@323 -- # killprocess 1574301 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@949 -- # '[' -z 1574301 ']' 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@953 -- # kill -0 1574301 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@954 -- # uname 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:01.796 18:53:16 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1574301 00:09:02.053 18:53:16 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:02.053 18:53:16 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:02.053 18:53:16 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1574301' 00:09:02.053 killing process with pid 1574301 00:09:02.053 18:53:16 json_config -- common/autotest_common.sh@968 -- # kill 1574301 00:09:02.053 18:53:16 json_config -- 
common/autotest_common.sh@973 -- # wait 1574301 00:09:04.580 18:53:18 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:04.580 18:53:18 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:09:04.580 18:53:18 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:04.580 18:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.580 18:53:18 json_config -- json_config/json_config.sh@328 -- # return 0 00:09:04.580 18:53:18 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:09:04.580 INFO: Success 00:09:04.580 00:09:04.580 real 0m29.335s 00:09:04.580 user 0m33.990s 00:09:04.580 sys 0m3.972s 00:09:04.580 18:53:18 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:04.580 18:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.580 ************************************ 00:09:04.580 END TEST json_config 00:09:04.580 ************************************ 00:09:04.580 18:53:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:04.580 18:53:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:04.580 18:53:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:04.580 18:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.580 ************************************ 00:09:04.580 START TEST json_config_extra_key 00:09:04.580 ************************************ 00:09:04.580 18:53:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:04.580 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.580 18:53:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:09:04.581 18:53:19 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:04.581 18:53:19 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.581 18:53:19 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.581 18:53:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.581 18:53:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.581 18:53:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.581 18:53:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:04.581 18:53:19 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.581 18:53:19 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:04.581 INFO: launching applications... 00:09:04.581 18:53:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1576336 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:04.581 Waiting for target to run... 
00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1576336 /var/tmp/spdk_tgt.sock 00:09:04.581 18:53:19 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1576336 ']' 00:09:04.581 18:53:19 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:09:04.581 18:53:19 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:04.581 18:53:19 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:04.581 18:53:19 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:04.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:04.581 18:53:19 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:04.581 18:53:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:04.581 [2024-06-10 18:53:19.172432] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:04.581 [2024-06-10 18:53:19.172500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576336 ] 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:04.840 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:04.840 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:04.840 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.840 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:04.840 [2024-06-10 18:53:19.560081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.098 [2024-06-10 18:53:19.636994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.356 18:53:20 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:05.356 18:53:20 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:05.356 00:09:05.356 18:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:05.356 INFO: shutting down applications... 
00:09:05.356 18:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1576336 ]] 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1576336 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1576336 00:09:05.356 18:53:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1576336 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:05.923 18:53:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:05.923 SPDK target shutdown done 00:09:05.923 18:53:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:05.923 Success 00:09:05.923 00:09:05.923 real 0m1.575s 00:09:05.923 user 0m1.201s 00:09:05.923 sys 0m0.485s 00:09:05.923 18:53:20 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:05.923 18:53:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:05.923 
************************************ 00:09:05.923 END TEST json_config_extra_key 00:09:05.923 ************************************ 00:09:05.923 18:53:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:05.923 18:53:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:05.924 18:53:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:05.924 18:53:20 -- common/autotest_common.sh@10 -- # set +x 00:09:05.924 ************************************ 00:09:05.924 START TEST alias_rpc 00:09:05.924 ************************************ 00:09:05.924 18:53:20 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:06.182 * Looking for test storage... 00:09:06.182 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:09:06.182 18:53:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:06.182 18:53:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1576730 00:09:06.182 18:53:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1576730 00:09:06.182 18:53:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:06.182 18:53:20 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1576730 ']' 00:09:06.182 18:53:20 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.182 18:53:20 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:06.182 18:53:20 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.182 18:53:20 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:06.182 18:53:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.182 [2024-06-10 18:53:20.821666] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:06.182 [2024-06-10 18:53:20.821733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576730 ] 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:06.182 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:06.182 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.182 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:06.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.183 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:06.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.183 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:06.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.183 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:06.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.183 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:06.441 [2024-06-10 18:53:20.947168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.441 [2024-06-10 18:53:21.033838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.006 18:53:21 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:07.006 18:53:21 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:07.006 18:53:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:07.263 18:53:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1576730 00:09:07.263 18:53:21 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1576730 ']' 00:09:07.263 18:53:21 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1576730 00:09:07.263 18:53:21 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:09:07.263 18:53:21 alias_rpc 
-- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:07.263 18:53:21 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1576730 00:09:07.520 18:53:22 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:07.520 18:53:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:07.520 18:53:22 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1576730' 00:09:07.520 killing process with pid 1576730 00:09:07.520 18:53:22 alias_rpc -- common/autotest_common.sh@968 -- # kill 1576730 00:09:07.520 18:53:22 alias_rpc -- common/autotest_common.sh@973 -- # wait 1576730 00:09:07.777 00:09:07.777 real 0m1.738s 00:09:07.777 user 0m1.904s 00:09:07.777 sys 0m0.564s 00:09:07.777 18:53:22 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:07.777 18:53:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.777 ************************************ 00:09:07.777 END TEST alias_rpc 00:09:07.777 ************************************ 00:09:07.777 18:53:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:07.777 18:53:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:07.777 18:53:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:07.777 18:53:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:07.777 18:53:22 -- common/autotest_common.sh@10 -- # set +x 00:09:07.777 ************************************ 00:09:07.777 START TEST spdkcli_tcp 00:09:07.777 ************************************ 00:09:07.777 18:53:22 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:08.034 * Looking for test storage... 
00:09:08.034 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1577174 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1577174 00:09:08.034 18:53:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1577174 ']' 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:08.034 18:53:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.034 [2024-06-10 18:53:22.639364] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:08.034 [2024-06-10 18:53:22.639423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577174 ] 00:09:08.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.034 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:08.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.034 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:08.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.034 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:08.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:08.035 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:08.035 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:08.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.035 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:08.035 [2024-06-10 18:53:22.774409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.292 [2024-06-10 18:53:22.864328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.292 [2024-06-10 18:53:22.864334] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.855 18:53:23 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:08.855 18:53:23 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:09:08.855 18:53:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1577192 00:09:08.855 18:53:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:08.855 18:53:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:09.112 [ 00:09:09.112 
"bdev_malloc_delete", 00:09:09.112 "bdev_malloc_create", 00:09:09.112 "bdev_null_resize", 00:09:09.112 "bdev_null_delete", 00:09:09.112 "bdev_null_create", 00:09:09.112 "bdev_nvme_cuse_unregister", 00:09:09.112 "bdev_nvme_cuse_register", 00:09:09.112 "bdev_opal_new_user", 00:09:09.112 "bdev_opal_set_lock_state", 00:09:09.112 "bdev_opal_delete", 00:09:09.112 "bdev_opal_get_info", 00:09:09.112 "bdev_opal_create", 00:09:09.112 "bdev_nvme_opal_revert", 00:09:09.112 "bdev_nvme_opal_init", 00:09:09.112 "bdev_nvme_send_cmd", 00:09:09.112 "bdev_nvme_get_path_iostat", 00:09:09.112 "bdev_nvme_get_mdns_discovery_info", 00:09:09.112 "bdev_nvme_stop_mdns_discovery", 00:09:09.112 "bdev_nvme_start_mdns_discovery", 00:09:09.112 "bdev_nvme_set_multipath_policy", 00:09:09.112 "bdev_nvme_set_preferred_path", 00:09:09.112 "bdev_nvme_get_io_paths", 00:09:09.112 "bdev_nvme_remove_error_injection", 00:09:09.112 "bdev_nvme_add_error_injection", 00:09:09.112 "bdev_nvme_get_discovery_info", 00:09:09.112 "bdev_nvme_stop_discovery", 00:09:09.113 "bdev_nvme_start_discovery", 00:09:09.113 "bdev_nvme_get_controller_health_info", 00:09:09.113 "bdev_nvme_disable_controller", 00:09:09.113 "bdev_nvme_enable_controller", 00:09:09.113 "bdev_nvme_reset_controller", 00:09:09.113 "bdev_nvme_get_transport_statistics", 00:09:09.113 "bdev_nvme_apply_firmware", 00:09:09.113 "bdev_nvme_detach_controller", 00:09:09.113 "bdev_nvme_get_controllers", 00:09:09.113 "bdev_nvme_attach_controller", 00:09:09.113 "bdev_nvme_set_hotplug", 00:09:09.113 "bdev_nvme_set_options", 00:09:09.113 "bdev_passthru_delete", 00:09:09.113 "bdev_passthru_create", 00:09:09.113 "bdev_lvol_set_parent_bdev", 00:09:09.113 "bdev_lvol_set_parent", 00:09:09.113 "bdev_lvol_check_shallow_copy", 00:09:09.113 "bdev_lvol_start_shallow_copy", 00:09:09.113 "bdev_lvol_grow_lvstore", 00:09:09.113 "bdev_lvol_get_lvols", 00:09:09.113 "bdev_lvol_get_lvstores", 00:09:09.113 "bdev_lvol_delete", 00:09:09.113 "bdev_lvol_set_read_only", 00:09:09.113 
"bdev_lvol_resize", 00:09:09.113 "bdev_lvol_decouple_parent", 00:09:09.113 "bdev_lvol_inflate", 00:09:09.113 "bdev_lvol_rename", 00:09:09.113 "bdev_lvol_clone_bdev", 00:09:09.113 "bdev_lvol_clone", 00:09:09.113 "bdev_lvol_snapshot", 00:09:09.113 "bdev_lvol_create", 00:09:09.113 "bdev_lvol_delete_lvstore", 00:09:09.113 "bdev_lvol_rename_lvstore", 00:09:09.113 "bdev_lvol_create_lvstore", 00:09:09.113 "bdev_raid_set_options", 00:09:09.113 "bdev_raid_remove_base_bdev", 00:09:09.113 "bdev_raid_add_base_bdev", 00:09:09.113 "bdev_raid_delete", 00:09:09.113 "bdev_raid_create", 00:09:09.113 "bdev_raid_get_bdevs", 00:09:09.113 "bdev_error_inject_error", 00:09:09.113 "bdev_error_delete", 00:09:09.113 "bdev_error_create", 00:09:09.113 "bdev_split_delete", 00:09:09.113 "bdev_split_create", 00:09:09.113 "bdev_delay_delete", 00:09:09.113 "bdev_delay_create", 00:09:09.113 "bdev_delay_update_latency", 00:09:09.113 "bdev_zone_block_delete", 00:09:09.113 "bdev_zone_block_create", 00:09:09.113 "blobfs_create", 00:09:09.113 "blobfs_detect", 00:09:09.113 "blobfs_set_cache_size", 00:09:09.113 "bdev_crypto_delete", 00:09:09.113 "bdev_crypto_create", 00:09:09.113 "bdev_compress_delete", 00:09:09.113 "bdev_compress_create", 00:09:09.113 "bdev_compress_get_orphans", 00:09:09.113 "bdev_aio_delete", 00:09:09.113 "bdev_aio_rescan", 00:09:09.113 "bdev_aio_create", 00:09:09.113 "bdev_ftl_set_property", 00:09:09.113 "bdev_ftl_get_properties", 00:09:09.113 "bdev_ftl_get_stats", 00:09:09.113 "bdev_ftl_unmap", 00:09:09.113 "bdev_ftl_unload", 00:09:09.113 "bdev_ftl_delete", 00:09:09.113 "bdev_ftl_load", 00:09:09.113 "bdev_ftl_create", 00:09:09.113 "bdev_virtio_attach_controller", 00:09:09.113 "bdev_virtio_scsi_get_devices", 00:09:09.113 "bdev_virtio_detach_controller", 00:09:09.113 "bdev_virtio_blk_set_hotplug", 00:09:09.113 "bdev_iscsi_delete", 00:09:09.113 "bdev_iscsi_create", 00:09:09.113 "bdev_iscsi_set_options", 00:09:09.113 "accel_error_inject_error", 00:09:09.113 "ioat_scan_accel_module", 
00:09:09.113 "dsa_scan_accel_module", 00:09:09.113 "iaa_scan_accel_module", 00:09:09.113 "dpdk_cryptodev_get_driver", 00:09:09.113 "dpdk_cryptodev_set_driver", 00:09:09.113 "dpdk_cryptodev_scan_accel_module", 00:09:09.113 "compressdev_scan_accel_module", 00:09:09.113 "keyring_file_remove_key", 00:09:09.113 "keyring_file_add_key", 00:09:09.113 "keyring_linux_set_options", 00:09:09.113 "iscsi_get_histogram", 00:09:09.113 "iscsi_enable_histogram", 00:09:09.113 "iscsi_set_options", 00:09:09.113 "iscsi_get_auth_groups", 00:09:09.113 "iscsi_auth_group_remove_secret", 00:09:09.113 "iscsi_auth_group_add_secret", 00:09:09.113 "iscsi_delete_auth_group", 00:09:09.113 "iscsi_create_auth_group", 00:09:09.113 "iscsi_set_discovery_auth", 00:09:09.113 "iscsi_get_options", 00:09:09.113 "iscsi_target_node_request_logout", 00:09:09.113 "iscsi_target_node_set_redirect", 00:09:09.113 "iscsi_target_node_set_auth", 00:09:09.113 "iscsi_target_node_add_lun", 00:09:09.113 "iscsi_get_stats", 00:09:09.113 "iscsi_get_connections", 00:09:09.113 "iscsi_portal_group_set_auth", 00:09:09.113 "iscsi_start_portal_group", 00:09:09.113 "iscsi_delete_portal_group", 00:09:09.113 "iscsi_create_portal_group", 00:09:09.113 "iscsi_get_portal_groups", 00:09:09.113 "iscsi_delete_target_node", 00:09:09.113 "iscsi_target_node_remove_pg_ig_maps", 00:09:09.113 "iscsi_target_node_add_pg_ig_maps", 00:09:09.113 "iscsi_create_target_node", 00:09:09.113 "iscsi_get_target_nodes", 00:09:09.113 "iscsi_delete_initiator_group", 00:09:09.113 "iscsi_initiator_group_remove_initiators", 00:09:09.113 "iscsi_initiator_group_add_initiators", 00:09:09.113 "iscsi_create_initiator_group", 00:09:09.113 "iscsi_get_initiator_groups", 00:09:09.113 "nvmf_set_crdt", 00:09:09.113 "nvmf_set_config", 00:09:09.113 "nvmf_set_max_subsystems", 00:09:09.113 "nvmf_stop_mdns_prr", 00:09:09.113 "nvmf_publish_mdns_prr", 00:09:09.113 "nvmf_subsystem_get_listeners", 00:09:09.113 "nvmf_subsystem_get_qpairs", 00:09:09.113 "nvmf_subsystem_get_controllers", 
00:09:09.113 "nvmf_get_stats", 00:09:09.113 "nvmf_get_transports", 00:09:09.113 "nvmf_create_transport", 00:09:09.113 "nvmf_get_targets", 00:09:09.113 "nvmf_delete_target", 00:09:09.113 "nvmf_create_target", 00:09:09.113 "nvmf_subsystem_allow_any_host", 00:09:09.113 "nvmf_subsystem_remove_host", 00:09:09.113 "nvmf_subsystem_add_host", 00:09:09.113 "nvmf_ns_remove_host", 00:09:09.113 "nvmf_ns_add_host", 00:09:09.113 "nvmf_subsystem_remove_ns", 00:09:09.113 "nvmf_subsystem_add_ns", 00:09:09.113 "nvmf_subsystem_listener_set_ana_state", 00:09:09.113 "nvmf_discovery_get_referrals", 00:09:09.113 "nvmf_discovery_remove_referral", 00:09:09.113 "nvmf_discovery_add_referral", 00:09:09.113 "nvmf_subsystem_remove_listener", 00:09:09.113 "nvmf_subsystem_add_listener", 00:09:09.113 "nvmf_delete_subsystem", 00:09:09.113 "nvmf_create_subsystem", 00:09:09.113 "nvmf_get_subsystems", 00:09:09.113 "env_dpdk_get_mem_stats", 00:09:09.113 "nbd_get_disks", 00:09:09.113 "nbd_stop_disk", 00:09:09.113 "nbd_start_disk", 00:09:09.113 "ublk_recover_disk", 00:09:09.113 "ublk_get_disks", 00:09:09.113 "ublk_stop_disk", 00:09:09.113 "ublk_start_disk", 00:09:09.113 "ublk_destroy_target", 00:09:09.113 "ublk_create_target", 00:09:09.113 "virtio_blk_create_transport", 00:09:09.113 "virtio_blk_get_transports", 00:09:09.113 "vhost_controller_set_coalescing", 00:09:09.113 "vhost_get_controllers", 00:09:09.113 "vhost_delete_controller", 00:09:09.113 "vhost_create_blk_controller", 00:09:09.113 "vhost_scsi_controller_remove_target", 00:09:09.113 "vhost_scsi_controller_add_target", 00:09:09.113 "vhost_start_scsi_controller", 00:09:09.113 "vhost_create_scsi_controller", 00:09:09.113 "thread_set_cpumask", 00:09:09.113 "framework_get_scheduler", 00:09:09.113 "framework_set_scheduler", 00:09:09.113 "framework_get_reactors", 00:09:09.113 "thread_get_io_channels", 00:09:09.113 "thread_get_pollers", 00:09:09.113 "thread_get_stats", 00:09:09.113 "framework_monitor_context_switch", 00:09:09.113 "spdk_kill_instance", 
00:09:09.113 "log_enable_timestamps", 00:09:09.113 "log_get_flags", 00:09:09.113 "log_clear_flag", 00:09:09.113 "log_set_flag", 00:09:09.113 "log_get_level", 00:09:09.113 "log_set_level", 00:09:09.113 "log_get_print_level", 00:09:09.113 "log_set_print_level", 00:09:09.113 "framework_enable_cpumask_locks", 00:09:09.113 "framework_disable_cpumask_locks", 00:09:09.113 "framework_wait_init", 00:09:09.113 "framework_start_init", 00:09:09.113 "scsi_get_devices", 00:09:09.113 "bdev_get_histogram", 00:09:09.113 "bdev_enable_histogram", 00:09:09.113 "bdev_set_qos_limit", 00:09:09.113 "bdev_set_qd_sampling_period", 00:09:09.113 "bdev_get_bdevs", 00:09:09.113 "bdev_reset_iostat", 00:09:09.113 "bdev_get_iostat", 00:09:09.113 "bdev_examine", 00:09:09.113 "bdev_wait_for_examine", 00:09:09.113 "bdev_set_options", 00:09:09.113 "notify_get_notifications", 00:09:09.113 "notify_get_types", 00:09:09.113 "accel_get_stats", 00:09:09.113 "accel_set_options", 00:09:09.113 "accel_set_driver", 00:09:09.113 "accel_crypto_key_destroy", 00:09:09.113 "accel_crypto_keys_get", 00:09:09.113 "accel_crypto_key_create", 00:09:09.113 "accel_assign_opc", 00:09:09.113 "accel_get_module_info", 00:09:09.113 "accel_get_opc_assignments", 00:09:09.113 "vmd_rescan", 00:09:09.113 "vmd_remove_device", 00:09:09.113 "vmd_enable", 00:09:09.113 "sock_get_default_impl", 00:09:09.113 "sock_set_default_impl", 00:09:09.113 "sock_impl_set_options", 00:09:09.113 "sock_impl_get_options", 00:09:09.113 "iobuf_get_stats", 00:09:09.113 "iobuf_set_options", 00:09:09.113 "framework_get_pci_devices", 00:09:09.113 "framework_get_config", 00:09:09.113 "framework_get_subsystems", 00:09:09.113 "trace_get_info", 00:09:09.113 "trace_get_tpoint_group_mask", 00:09:09.114 "trace_disable_tpoint_group", 00:09:09.114 "trace_enable_tpoint_group", 00:09:09.114 "trace_clear_tpoint_mask", 00:09:09.114 "trace_set_tpoint_mask", 00:09:09.114 "keyring_get_keys", 00:09:09.114 "spdk_get_version", 00:09:09.114 "rpc_get_methods" 00:09:09.114 ] 
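The method list above is the reply to the `rpc_get_methods` call issued through the socat TCP-to-UNIX bridge (`TCP-LISTEN:9998` forwarded to `/var/tmp/spdk.sock`). On the wire this is plain JSON-RPC 2.0, which is what SPDK's `scripts/rpc.py` speaks; a minimal sketch of building and sending such a request (the `call_spdk_rpc` helper is illustrative, not part of SPDK, and a running `spdk_tgt` is assumed for the network path):

```python
import json
import socket


def build_rpc_request(method: str, req_id: int = 1, params=None) -> bytes:
    """Serialize a JSON-RPC 2.0 request the way SPDK's rpc.py does."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()


def call_spdk_rpc(addr, method: str, family=socket.AF_UNIX):
    """Illustrative helper: send one request and read the JSON reply.

    addr is e.g. "/var/tmp/spdk.sock" with AF_UNIX, or ("127.0.0.1", 9998)
    with socket.AF_INET when going through the socat bridge from the log.
    """
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.connect(addr)
        s.sendall(build_rpc_request(method))
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:  # stop once a complete JSON document has arrived
                return json.loads(buf)
            except json.JSONDecodeError:
                continue
        raise ValueError("connection closed before a full JSON reply arrived")


# The request bytes behind the rpc_get_methods call seen in the log:
print(build_rpc_request("rpc_get_methods").decode())
# → {"jsonrpc": "2.0", "method": "rpc_get_methods", "id": 1}
```

With the socat bridge running, the same call would go out as `call_spdk_rpc(("127.0.0.1", 9998), "rpc_get_methods", family=socket.AF_INET)`, matching the `rpc.py -s 127.0.0.1 -p 9998` invocation in the trace.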
00:09:09.114 18:53:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.114 18:53:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:09.114 18:53:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1577174 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1577174 ']' 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1577174 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1577174 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1577174' 00:09:09.114 killing process with pid 1577174 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1577174 00:09:09.114 18:53:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1577174 00:09:09.727 00:09:09.727 real 0m1.724s 00:09:09.727 user 0m3.089s 00:09:09.727 sys 0m0.609s 00:09:09.727 18:53:24 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:09.727 18:53:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.727 ************************************ 00:09:09.727 END TEST spdkcli_tcp 00:09:09.727 ************************************ 00:09:09.727 18:53:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:09.727 18:53:24 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:09.727 18:53:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:09.727 18:53:24 -- common/autotest_common.sh@10 -- # set +x 00:09:09.727 ************************************ 00:09:09.727 START TEST dpdk_mem_utility 00:09:09.727 ************************************ 00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:09.727 * Looking for test storage... 00:09:09.727 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:09:09.727 18:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:09.727 18:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1577517 00:09:09.727 18:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1577517 00:09:09.727 18:53:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1577517 ']' 00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
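The `waitforlisten` step above blocks until the freshly launched `spdk_tgt` accepts connections on its UNIX-domain RPC socket. A minimal sketch of that polling pattern (the helper name and the throwaway in-process listener are illustrative; the real harness targets `/var/tmp/spdk.sock` and also checks the PID):

```python
import os
import socket
import tempfile
import time


def wait_for_listen(path: str, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll a UNIX-domain socket until connect() succeeds or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)  # succeeds once something is listening
                return True
            except (FileNotFoundError, ConnectionRefusedError):
                time.sleep(interval)  # socket absent or not yet listening
    return False


# Demo against a throwaway listener created in this process:
sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
ok = wait_for_listen(sock_path)
print(ok)
server.close()
```

The retry loop in the trace (`local max_retries=100` with the `(( i == 0 ))` check) serves the same purpose: keep probing the socket until the target is ready, then `return 0`.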
00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:09.727 18:53:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:09.727 [2024-06-10 18:53:24.428438] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:09.728 [2024-06-10 18:53:24.428500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577517 ] 00:09:09.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.985 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:09.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.985 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:09.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.985 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:09.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.985 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:09.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.985 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:09.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.985 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:09.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:09.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:09.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.986 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:09.986 [2024-06-10 18:53:24.560956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.986 [2024-06-10 18:53:24.644696] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.552 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:10.552 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:09:10.552 18:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:10.552 18:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:10.552 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.552 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.553 { 00:09:10.553 "filename": 
"/tmp/spdk_mem_dump.txt" 00:09:10.553 } 00:09:10.553 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.553 18:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:10.815 DPDK memory size 816.000000 MiB in 2 heap(s) 00:09:10.815 2 heaps totaling size 816.000000 MiB 00:09:10.815 size: 814.000000 MiB heap id: 0 00:09:10.815 size: 2.000000 MiB heap id: 1 00:09:10.815 end heaps---------- 00:09:10.815 8 mempools totaling size 598.116089 MiB 00:09:10.815 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:10.815 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:10.815 size: 84.521057 MiB name: bdev_io_1577517 00:09:10.815 size: 51.011292 MiB name: evtpool_1577517 00:09:10.815 size: 50.003479 MiB name: msgpool_1577517 00:09:10.815 size: 21.763794 MiB name: PDU_Pool 00:09:10.815 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:10.815 size: 0.026123 MiB name: Session_Pool 00:09:10.815 end mempools------- 00:09:10.815 201 memzones totaling size 4.176453 MiB 00:09:10.815 size: 1.000366 MiB name: RG_ring_0_1577517 00:09:10.815 size: 1.000366 MiB name: RG_ring_1_1577517 00:09:10.815 size: 1.000366 MiB name: RG_ring_4_1577517 00:09:10.815 size: 1.000366 MiB name: RG_ring_5_1577517 00:09:10.815 size: 0.125366 MiB name: RG_ring_2_1577517 00:09:10.815 size: 0.015991 MiB name: RG_ring_3_1577517 00:09:10.815 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.0_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.1_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.2_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.3_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.4_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.5_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.6_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:01.7_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.0_qat 
00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.1_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.2_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.3_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.4_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.5_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.6_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3d:02.7_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.0_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.1_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.2_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.3_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.4_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.5_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.6_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:01.7_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.0_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.1_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.2_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.3_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.4_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.5_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.6_qat 00:09:10.815 size: 0.000305 MiB name: 0000:3f:02.7_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.0_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.1_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.2_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.3_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.4_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.5_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.6_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:01.7_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.0_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.1_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.2_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.3_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.4_qat 00:09:10.815 size: 
0.000305 MiB name: 0000:b4:02.5_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.6_qat 00:09:10.815 size: 0.000305 MiB name: 0000:b4:02.7_qat 00:09:10.815 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_0 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_1 00:09:10.815 size: 0.000122 MiB name: rte_compressdev_data_0 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_2 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_3 00:09:10.815 size: 0.000122 MiB name: rte_compressdev_data_1 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_4 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_5 00:09:10.815 size: 0.000122 MiB name: rte_compressdev_data_2 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_6 00:09:10.815 size: 0.000122 MiB name: rte_cryptodev_data_7 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_3 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_8 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_9 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_4 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_10 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_11 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_5 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_12 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_13 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_6 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_14 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_15 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_7 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_16 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_17 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_8 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_18 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_19 00:09:10.816 size: 0.000122 MiB name: 
rte_compressdev_data_9 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_20 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_21 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_10 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_22 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_23 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_11 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_24 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_25 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_12 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_26 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_27 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_13 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_28 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_29 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_14 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_30 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_31 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_15 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_32 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_33 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_16 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_34 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_35 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_17 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_36 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_37 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_18 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_38 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_39 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_19 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_40 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_41 00:09:10.816 size: 0.000122 MiB 
name: rte_compressdev_data_20 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_42 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_43 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_21 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_44 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_45 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_22 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_46 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_47 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_23 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_48 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_49 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_24 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_50 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_51 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_25 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_52 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_53 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_26 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_54 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_55 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_27 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_56 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_57 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_28 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_58 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_59 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_29 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_60 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_61 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_30 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_62 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_63 00:09:10.816 size: 0.000122 
MiB name: rte_compressdev_data_31 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_64 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_65 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_32 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_66 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_67 00:09:10.816 size: 0.000122 MiB name: rte_compressdev_data_33 00:09:10.816 size: 0.000122 MiB name: rte_cryptodev_data_68 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_69 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_34 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_70 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_71 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_35 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_72 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_73 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_36 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_74 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_75 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_37 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_76 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_77 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_38 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_78 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_79 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_39 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_80 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_81 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_40 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_82 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_83 00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_41 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_84 00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_85 00:09:10.817 size: 
0.000122 MiB name: rte_compressdev_data_42
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_86
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_87
00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_43
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_88
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_89
00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_44
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_90
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_91
00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_45
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_92
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_93
00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_46
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_94
00:09:10.817 size: 0.000122 MiB name: rte_cryptodev_data_95
00:09:10.817 size: 0.000122 MiB name: rte_compressdev_data_47
00:09:10.817 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:09:10.817 end memzones-------
00:09:10.817 18:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:09:10.817 heap id: 0 total size: 814.000000 MiB number of busy elements: 621 number of free elements: 14
00:09:10.817 list of free elements. size: 11.796143 MiB
00:09:10.817 element at address: 0x200000400000 with size: 1.999512 MiB
00:09:10.817 element at address: 0x200018e00000 with size: 0.999878 MiB
00:09:10.817 element at address: 0x200019000000 with size: 0.999878 MiB
00:09:10.817 element at address: 0x200003e00000 with size: 0.996460 MiB
00:09:10.817 element at address: 0x200031c00000 with size: 0.994446 MiB
00:09:10.817 element at address: 0x200013800000 with size: 0.978882 MiB
00:09:10.817 element at address: 0x200007000000 with size: 0.959839 MiB
00:09:10.817 element at address: 0x200019200000 with size: 0.937256 MiB
00:09:10.817 element at address: 0x20001aa00000 with size: 0.572449 MiB
00:09:10.817 element at address: 0x200003a00000 with size: 0.498535 MiB
00:09:10.817 element at address: 0x20000b200000 with size: 0.491272 MiB
00:09:10.817 element at address: 0x200000800000 with size: 0.486145 MiB
00:09:10.817 element at address: 0x200019400000 with size: 0.485840 MiB
00:09:10.817 element at address: 0x200027e00000 with size: 0.395752 MiB
00:09:10.817 list of standard malloc elements. size: 199.895569 MiB
00:09:10.817 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:09:10.817 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:09:10.817 element at address: 0x200018efff80 with size: 1.000122 MiB
00:09:10.817 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:09:10.817 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:09:10.817 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:09:10.817 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:09:10.817 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:09:10.817 element at address: 0x200000330b40 with size: 0.004395 MiB
00:09:10.817 element at address: 0x2000003340c0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x200000337640 with size: 0.004395 MiB
00:09:10.817 element at address: 0x20000033abc0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x20000033e140 with size: 0.004395 MiB
00:09:10.817 element at address: 0x2000003416c0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x200000344c40 with size: 0.004395 MiB
00:09:10.817 element at address: 0x2000003481c0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x20000034b740 with size: 0.004395 MiB
00:09:10.817 element at address: 0x20000034ecc0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x200000352240 with size: 0.004395 MiB
00:09:10.817 element at address: 0x2000003557c0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x200000358d40 with size: 0.004395 MiB
00:09:10.817 element at address: 0x20000035c2c0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x20000035f840 with size: 0.004395 MiB
00:09:10.817 element at address: 0x200000362dc0 with size: 0.004395 MiB
00:09:10.817 element at address: 0x200000366880 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000036a340 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000036de00 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003718c0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000375380 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000378e40 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000037c900 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003803c0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000383e80 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000387940 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000038b400 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000038eec0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000392980 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000396440 with size: 0.004395 MiB
00:09:10.818 element at address: 0x200000399f00 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000039d9c0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003a1480 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003a4f40 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003a8a00 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003ac4c0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003aff80 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003b3a40 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003b7500 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003bafc0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003bea80 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003c2540 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003c6000 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003c9ac0 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003cd580 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003d1040 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003d4b00 with size: 0.004395 MiB
00:09:10.818 element at address: 0x2000003d8d00 with size: 0.004395 MiB
00:09:10.818 element at address: 0x20000032ea40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000032fac0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000331fc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000333040 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000335540 with size: 0.004028 MiB
00:09:10.818 element at address: 0x2000003365c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000338ac0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000339b40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000033c040 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000033d0c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000033f5c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000340640 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000342b40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000343bc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x2000003460c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000347140 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000349640 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000034a6c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000034cbc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000034dc40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000350140 with size: 0.004028 MiB
00:09:10.818 element at address: 0x2000003511c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x2000003536c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000354740 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000356c40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000357cc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000035a1c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000035b240 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000035d740 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000035e7c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000360cc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000361d40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000364780 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000365800 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000368240 with size: 0.004028 MiB
00:09:10.818 element at address: 0x2000003692c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000036bd00 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000036cd80 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000036f7c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000370840 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000373280 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000374300 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000376d40 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000377dc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000037a800 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000037b880 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000037e2c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000037f340 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000381d80 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000382e00 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000385840 with size: 0.004028 MiB
00:09:10.818 element at address: 0x2000003868c0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x200000389300 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000038a380 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000038cdc0 with size: 0.004028 MiB
00:09:10.818 element at address: 0x20000038de40 with size: 0.004028 MiB
00:09:10.819 element at address: 0x200000390880 with size: 0.004028 MiB
00:09:10.819 element at address: 0x200000391900 with size: 0.004028 MiB
00:09:10.819 element at address: 0x200000394340 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003953c0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x200000397e00 with size: 0.004028 MiB
00:09:10.819 element at address: 0x200000398e80 with size: 0.004028 MiB
00:09:10.819 element at address: 0x20000039b8c0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x20000039c940 with size: 0.004028 MiB
00:09:10.819 element at address: 0x20000039f380 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003a0400 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003a2e40 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003a3ec0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003a6900 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003a7980 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003aa3c0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003ab440 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003ade80 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003aef00 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003b1940 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003b29c0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003b5400 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003b6480 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003b8ec0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003b9f40 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003bc980 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003bda00 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003c0440 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003c14c0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003c3f00 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003c4f80 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003c79c0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003c8a40 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003cb480 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003cc500 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003cef40 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003cffc0 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003d2a00 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003d3a80 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003d6c00 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000003d7c80 with size: 0.004028 MiB
00:09:10.819 element at address: 0x2000002030c0 with size: 0.000305 MiB
00:09:10.819 element at address: 0x200000200000 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002000c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200180 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200240 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200300 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002003c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200480 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200540 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200600 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002006c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200780 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200840 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200900 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002009c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200a80 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200b40 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200c00 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200cc0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200d80 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200e40 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200f00 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000200fc0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201080 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201140 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201200 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002012c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201380 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201440 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201500 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002015c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201680 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201740 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201800 with size: 0.000183 MiB
00:09:10.819 element at address: 0x2000002018c0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201980 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201a40 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201b00 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201bc0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201c80 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201d40 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201e00 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201ec0 with size: 0.000183 MiB
00:09:10.819 element at address: 0x200000201f80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202040 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202100 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002021c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202280 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202340 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202400 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002024c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202580 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202640 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202700 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002027c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202880 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202940 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202a00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202ac0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202b80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202c40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202d00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202dc0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202e80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000202f40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203000 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203200 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002032c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203380 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203440 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203500 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002035c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203680 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203740 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203800 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002038c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203980 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203a40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203b00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203bc0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203c80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203d40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203e00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203ec0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000203f80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204040 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204100 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002041c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204280 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204340 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204400 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002044c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204580 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204640 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204700 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002047c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204880 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204940 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204a00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204ac0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204b80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204c40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204d00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204dc0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204e80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000204f40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000205000 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002050c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000205180 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000205240 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000205440 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000209700 with size: 0.000183 MiB
00:09:10.820 element at address: 0x2000002299c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229a80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229b40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229c00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229cc0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229d80 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229e40 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229f00 with size: 0.000183 MiB
00:09:10.820 element at address: 0x200000229fc0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a080 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a140 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a200 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a2c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a380 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a440 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a500 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a5c0 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a680 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a740 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a800 with size: 0.000183 MiB
00:09:10.820 element at address: 0x20000022a8c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022a980 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022aa40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022ab00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022abc0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022ac80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022ad40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022ae00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022aec0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022af80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b180 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b240 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b300 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b3c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b480 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b540 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b600 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b6c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b780 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b840 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b900 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022b9c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022ba80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022bb40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022bc00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022bcc0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022bd80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022be40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022bf00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022bfc0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c080 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c140 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c200 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c2c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c380 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c440 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000022c500 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000032e700 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000032e7c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000331d40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x2000003352c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000338840 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000033bdc0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000033f340 with size: 0.000183 MiB
00:09:10.821 element at address: 0x2000003428c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000345e40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x2000003493c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000034c940 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000034fec0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000353440 with size: 0.000183 MiB
00:09:10.821 element at address: 0x2000003569c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000359f40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000035d4c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000360a40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000363fc0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000364180 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000364240 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000364400 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000367a80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000367c40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000367d00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000367ec0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036b540 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036b700 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036b7c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036b980 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036f000 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036f1c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036f280 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000036f440 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000372ac0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000372c80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000372d40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000372f00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000376580 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000376740 with size: 0.000183 MiB
00:09:10.821 element at address: 0x200000376800 with size: 0.000183 MiB
00:09:10.821 element at address: 0x2000003769c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037a040 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037a200 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037a2c0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037a480 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037db00 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037dcc0 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037dd80 with size: 0.000183 MiB
00:09:10.821 element at address: 0x20000037df40 with size: 0.000183 MiB
00:09:10.821 element at address: 0x2000003815c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000381780 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000381840 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000381a00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000385080 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000385240 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000385300 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003854c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000388b40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000388d00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000388dc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000388f80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000038c600 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000038c7c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000038c880 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000038ca40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003900c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000390280 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000390340 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000390500 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000393b80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000393d40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000393e00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000393fc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000397640 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000397800 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003978c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x200000397a80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039b100 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039b2c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039b380 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039b540 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039ebc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039ed80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039ee40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000039f000 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a2680 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a2840 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a2900 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a2ac0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a6140 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a6300 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a63c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a6580 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a9c00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a9dc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003a9e80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003aa040 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003ad6c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003ad880 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003ad940 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003adb00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b1180 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b1340 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b1400 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b15c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b4c40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b4e00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b4ec0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b5080 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b8700 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b88c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b8980 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003b8b40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bc1c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bc380 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bc440 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bc600 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bfc80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bfe40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003bff00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c00c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c3740 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c3900 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c39c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c3b80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c7200 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c73c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c7480 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003c7640 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003cacc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003cae80 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003caf40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003cb100 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003ce780 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003ce940 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003cea00 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003cebc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d2240 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d2400 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d24c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d2680 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d5dc0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d64c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d6580 with size: 0.000183 MiB
00:09:10.822 element at address: 0x2000003d6880 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000087c740 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000087c800 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000087c8c0 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000087c980 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000087ca40 with size: 0.000183 MiB
00:09:10.822 element at address: 0x20000087cb00 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20000087cbc0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20000087cc80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20000087cd40 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:09:10.823 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa928c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92980 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92a40 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92b00 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92bc0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92c80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92d40 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92e00 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92ec0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa92f80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93040 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93100 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa931c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93280 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93340 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93400 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa934c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93580 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93640 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93700 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa937c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93880 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93940 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93a00 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93ac0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93b80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93c40 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93d00 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93dc0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93e80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa93f40 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94000 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa940c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94180 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94240 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94300 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa943c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94480 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94540 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94600 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa946c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94780 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94840 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94900 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa949c0 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94a80 with size: 0.000183 MiB
00:09:10.823 element at address: 0x20001aa94b40 with size: 0.000183 MiB
00:09:10.823 
element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:10.823 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e65500 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:09:10.823 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ccc0 with size: 0.000183 
MiB 00:09:10.824 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e1c0 
with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:10.824 element at 
address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:10.824 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:10.824 list of memzone associated elements. size: 602.308289 MiB 00:09:10.824 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:10.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:10.824 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:10.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:10.824 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:10.824 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1577517_0 00:09:10.824 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:10.824 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1577517_0 00:09:10.824 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:10.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1577517_0 00:09:10.824 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:10.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:10.824 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:10.824 associated memzone info: size: 18.004944 MiB 
name: MP_SCSI_TASK_Pool_0 00:09:10.824 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:10.824 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1577517 00:09:10.825 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:10.825 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1577517 00:09:10.825 element at address: 0x20000022c5c0 with size: 1.008118 MiB 00:09:10.825 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1577517 00:09:10.825 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:10.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:10.825 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:10.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:10.825 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:10.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:10.825 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:10.825 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:10.825 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:10.825 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1577517 00:09:10.825 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:10.825 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1577517 00:09:10.825 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:10.825 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1577517 00:09:10.825 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:10.825 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1577517 00:09:10.825 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:09:10.825 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1577517 00:09:10.825 element at address: 0x20000b27dc40 with size: 0.500488 MiB 00:09:10.825 associated memzone info: size: 
0.500366 MiB name: RG_MP_PDU_Pool 00:09:10.825 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:10.825 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:10.825 element at address: 0x20001947c600 with size: 0.250488 MiB 00:09:10.825 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:10.825 element at address: 0x2000002097c0 with size: 0.125488 MiB 00:09:10.825 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1577517 00:09:10.825 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:10.825 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:10.825 element at address: 0x200027e65680 with size: 0.023743 MiB 00:09:10.825 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:10.825 element at address: 0x200000205500 with size: 0.016113 MiB 00:09:10.825 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1577517 00:09:10.825 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:09:10.825 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:10.825 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:09:10.825 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:09:10.825 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.0_qat 00:09:10.825 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.1_qat 00:09:10.825 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.2_qat 00:09:10.825 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.3_qat 00:09:10.825 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:09:10.825 associated memzone info: 
size: 0.000305 MiB name: 0000:3d:01.4_qat 00:09:10.825 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.5_qat 00:09:10.825 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.6_qat 00:09:10.825 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.7_qat 00:09:10.825 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.0_qat 00:09:10.825 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.1_qat 00:09:10.825 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.2_qat 00:09:10.825 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.3_qat 00:09:10.825 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.4_qat 00:09:10.825 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.5_qat 00:09:10.825 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.6_qat 00:09:10.825 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.7_qat 00:09:10.825 element at address: 0x20000039b700 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.0_qat 00:09:10.825 element at address: 0x200000397c40 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB 
name: 0000:3f:01.1_qat 00:09:10.825 element at address: 0x200000394180 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.2_qat 00:09:10.825 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:09:10.825 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.3_qat 00:09:10.825 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.4_qat 00:09:10.826 element at address: 0x200000389140 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.5_qat 00:09:10.826 element at address: 0x200000385680 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.6_qat 00:09:10.826 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.7_qat 00:09:10.826 element at address: 0x20000037e100 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.0_qat 00:09:10.826 element at address: 0x20000037a640 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.1_qat 00:09:10.826 element at address: 0x200000376b80 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.2_qat 00:09:10.826 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.3_qat 00:09:10.826 element at address: 0x20000036f600 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.4_qat 00:09:10.826 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.5_qat 00:09:10.826 element at address: 0x200000368080 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.6_qat 
00:09:10.826 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.7_qat 00:09:10.826 element at address: 0x200000360b00 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.0_qat 00:09:10.826 element at address: 0x20000035d580 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.1_qat 00:09:10.826 element at address: 0x20000035a000 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.2_qat 00:09:10.826 element at address: 0x200000356a80 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.3_qat 00:09:10.826 element at address: 0x200000353500 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.4_qat 00:09:10.826 element at address: 0x20000034ff80 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.5_qat 00:09:10.826 element at address: 0x20000034ca00 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.6_qat 00:09:10.826 element at address: 0x200000349480 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:01.7_qat 00:09:10.826 element at address: 0x200000345f00 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.0_qat 00:09:10.826 element at address: 0x200000342980 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.1_qat 00:09:10.826 element at address: 0x20000033f400 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.2_qat 00:09:10.826 element at address: 0x20000033be80 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.3_qat 00:09:10.826 element at 
address: 0x200000338900 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.4_qat 00:09:10.826 element at address: 0x200000335380 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.5_qat 00:09:10.826 element at address: 0x200000331e00 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.6_qat 00:09:10.826 element at address: 0x20000032e880 with size: 0.000427 MiB 00:09:10.826 associated memzone info: size: 0.000305 MiB name: 0000:b4:02.7_qat 00:09:10.826 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:09:10.826 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:09:10.826 element at address: 0x20000022b040 with size: 0.000305 MiB 00:09:10.826 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1577517 00:09:10.826 element at address: 0x200000205300 with size: 0.000305 MiB 00:09:10.826 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1577517 00:09:10.826 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:09:10.826 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:10.826 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:09:10.826 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:09:10.826 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:09:10.826 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:09:10.826 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:09:10.826 
element at address: 0x2000003d2300 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:09:10.826 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:09:10.826 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:09:10.826 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:09:10.826 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:09:10.826 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:09:10.826 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:09:10.827 element at address: 0x2000003cad80 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:09:10.827 element at address: 0x2000003c7700 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:09:10.827 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:09:10.827 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:09:10.827 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:09:10.827 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:09:10.827 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB 
name: rte_compressdev_data_5 00:09:10.827 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:09:10.827 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:09:10.827 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:09:10.827 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:09:10.827 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:09:10.827 element at address: 0x2000003bc280 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:09:10.827 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:09:10.827 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:09:10.827 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:09:10.827 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:09:10.827 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:09:10.827 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:09:10.827 element at address: 0x2000003b1680 with size: 0.000244 MiB 
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:09:10.827 element at address: 0x2000003b14c0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:09:10.827 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:09:10.827 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:09:10.827 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:09:10.827 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:09:10.827 element at address: 0x2000003aa100 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:09:10.827 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:09:10.827 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:09:10.827 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:09:10.827 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:09:10.827 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:09:10.827 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:09:10.827 element 
at address: 0x2000003a29c0 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29
00:09:10.827 element at address: 0x2000003a2740 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14
00:09:10.827 element at address: 0x20000039f0c0 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30
00:09:10.827 element at address: 0x20000039ef00 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31
00:09:10.827 element at address: 0x20000039ec80 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15
00:09:10.827 element at address: 0x20000039b600 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32
00:09:10.827 element at address: 0x20000039b440 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33
00:09:10.827 element at address: 0x20000039b1c0 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16
00:09:10.827 element at address: 0x200000397b40 with size: 0.000244 MiB
00:09:10.827 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34
00:09:10.827 element at address: 0x200000397980 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35
00:09:10.828 element at address: 0x200000397700 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17
00:09:10.828 element at address: 0x200000394080 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36
00:09:10.828 element at address: 0x200000393ec0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37
00:09:10.828 element at address: 0x200000393c40 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18
00:09:10.828 element at address: 0x2000003905c0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38
00:09:10.828 element at address: 0x200000390400 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39
00:09:10.828 element at address: 0x200000390180 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19
00:09:10.828 element at address: 0x20000038cb00 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40
00:09:10.828 element at address: 0x20000038c940 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41
00:09:10.828 element at address: 0x20000038c6c0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20
00:09:10.828 element at address: 0x200000389040 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42
00:09:10.828 element at address: 0x200000388e80 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43
00:09:10.828 element at address: 0x200000388c00 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21
00:09:10.828 element at address: 0x200000385580 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44
00:09:10.828 element at address: 0x2000003853c0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45
00:09:10.828 element at address: 0x200000385140 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22
00:09:10.828 element at address: 0x200000381ac0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46
00:09:10.828 element at address: 0x200000381900 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47
00:09:10.828 element at address: 0x200000381680 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23
00:09:10.828 element at address: 0x20000037e000 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48
00:09:10.828 element at address: 0x20000037de40 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49
00:09:10.828 element at address: 0x20000037dbc0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24
00:09:10.828 element at address: 0x20000037a540 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50
00:09:10.828 element at address: 0x20000037a380 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51
00:09:10.828 element at address: 0x20000037a100 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25
00:09:10.828 element at address: 0x200000376a80 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52
00:09:10.828 element at address: 0x2000003768c0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53
00:09:10.828 element at address: 0x200000376640 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26
00:09:10.828 element at address: 0x200000372fc0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54
00:09:10.828 element at address: 0x200000372e00 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55
00:09:10.828 element at address: 0x200000372b80 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27
00:09:10.828 element at address: 0x20000036f500 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56
00:09:10.828 element at address: 0x20000036f340 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57
00:09:10.828 element at address: 0x20000036f0c0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28
00:09:10.828 element at address: 0x20000036ba40 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58
00:09:10.828 element at address: 0x20000036b880 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59
00:09:10.828 element at address: 0x20000036b600 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29
00:09:10.828 element at address: 0x200000367f80 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60
00:09:10.828 element at address: 0x200000367dc0 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61
00:09:10.828 element at address: 0x200000367b40 with size: 0.000244 MiB
00:09:10.828 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30
00:09:10.828 element at address: 0x2000003644c0 with size: 0.000244 MiB
00:09:10.829 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62
00:09:10.829 element at address: 0x200000364300 with size: 0.000244 MiB
00:09:10.829 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63
00:09:10.829 element at address: 0x200000364080 with size: 0.000244 MiB
00:09:10.829 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31
00:09:10.829 element at address: 0x2000003d5d00 with size: 0.000183 MiB
00:09:10.829 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:09:10.829 18:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:10.829 18:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1577517
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1577517 ']'
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1577517
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1577517
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1577517'
00:09:10.829 killing process with pid 1577517
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1577517
00:09:10.829 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1577517
00:09:11.394
00:09:11.394 real 0m1.612s
00:09:11.394 user 0m1.707s
00:09:11.394 sys 0m0.537s
00:09:11.394 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:11.394 18:53:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:11.394 ************************************
00:09:11.394 END TEST dpdk_mem_utility
00:09:11.394 ************************************
00:09:11.394 18:53:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:09:11.394 18:53:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:09:11.394 18:53:25 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:11.394 18:53:25 -- common/autotest_common.sh@10 -- # set +x
00:09:11.394 ************************************
00:09:11.394 START TEST event
00:09:11.394 ************************************
00:09:11.394 18:53:25 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:09:11.394 * Looking for test storage...
00:09:11.394 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event
00:09:11.394 18:53:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:09:11.394 18:53:26 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:11.394 18:53:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:11.394 18:53:26 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:09:11.394 18:53:26 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:11.394 18:53:26 event -- common/autotest_common.sh@10 -- # set +x
00:09:11.394 ************************************
00:09:11.394 START TEST event_perf
00:09:11.394 ************************************
00:09:11.394 18:53:26 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:11.394 Running I/O for 1 seconds...[2024-06-10 18:53:26.114802] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:11.394 [2024-06-10 18:53:26.114862] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577844 ]
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:11.651 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.651 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:11.652 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:11.652 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:11.652 [2024-06-10 18:53:26.246990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:11.652 [2024-06-10 18:53:26.334604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:09:11.652 [2024-06-10 18:53:26.334694] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:09:11.652 [2024-06-10 18:53:26.334809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:09:11.652 [2024-06-10 18:53:26.334810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.022 Running I/O for 1 seconds...
00:09:13.022 lcore 0: 185795
00:09:13.022 lcore 1: 185791
00:09:13.022 lcore 2: 185792
00:09:13.022 lcore 3: 185793
00:09:13.022 done.
00:09:13.022
00:09:13.022 real 0m1.323s
00:09:13.022 user 0m4.169s
00:09:13.022 sys 0m0.148s
00:09:13.022 18:53:27 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:13.022 18:53:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:09:13.022 ************************************
00:09:13.022 END TEST event_perf
00:09:13.022 ************************************
00:09:13.022 18:53:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:09:13.022 18:53:27 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:09:13.022 18:53:27 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:13.022 18:53:27 event -- common/autotest_common.sh@10 -- # set +x
00:09:13.022 ************************************
00:09:13.022 START TEST event_reactor
00:09:13.022 ************************************
00:09:13.022 18:53:27 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:09:13.022 [2024-06-10 18:53:27.517552] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:13.022 [2024-06-10 18:53:27.517615] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578127 ]
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.022 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:13.022 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:13.023 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:13.023 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:13.023 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:13.023 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:13.023 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:13.023 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.023 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:13.023 [2024-06-10 18:53:27.651108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.023 [2024-06-10 18:53:27.734956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.396 test_start
00:09:14.396 oneshot
00:09:14.396 tick 100
00:09:14.396 tick 100
00:09:14.396 tick 250
00:09:14.396 tick 100
00:09:14.396 tick 100
00:09:14.396 tick 100
00:09:14.396 tick 250
00:09:14.396 tick 500
00:09:14.396 tick 100
00:09:14.396 tick 100
00:09:14.396 tick 250
00:09:14.396 tick 100
00:09:14.396 tick 100
00:09:14.396 test_end
00:09:14.396
00:09:14.396 real 0m1.321s
00:09:14.396 user 0m1.183s
00:09:14.396 sys 0m0.132s
00:09:14.396 18:53:28 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:14.396 18:53:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:09:14.396 ************************************
00:09:14.396 END TEST event_reactor
00:09:14.396 ************************************
00:09:14.396 18:53:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
18:53:28 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:09:14.396 18:53:28 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:14.396 18:53:28 event -- common/autotest_common.sh@10 -- # set +x
00:09:14.396 ************************************
00:09:14.396 START TEST event_reactor_perf
00:09:14.396 ************************************
00:09:14.396 18:53:28 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:14.396 [2024-06-10 18:53:28.914523] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:14.396 [2024-06-10 18:53:28.914585] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578417 ]
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:14.396 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.396 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:14.396 [2024-06-10 18:53:29.047387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.396 [2024-06-10 18:53:29.130887] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.769 test_start
00:09:15.769 test_end
00:09:15.769 Performance: 356218 events per second
00:09:15.769
00:09:15.769 real 0m1.317s
00:09:15.769 user 0m1.182s
00:09:15.769 sys 0m0.129s
00:09:15.769 18:53:30 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:15.769 18:53:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:09:15.769 ************************************
00:09:15.769 END TEST event_reactor_perf
00:09:15.769 ************************************
00:09:15.769 18:53:30 event -- event/event.sh@49 -- # uname -s
00:09:15.769 18:53:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:15.769 18:53:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:09:15.769 18:53:30 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:09:15.769 18:53:30 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:15.769 18:53:30 event -- common/autotest_common.sh@10 -- # set +x
00:09:15.769 ************************************
00:09:15.769 START TEST event_scheduler
00:09:15.769 ************************************
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:09:15.769 * Looking for test storage...
00:09:15.769 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler
00:09:15.769 18:53:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:15.769 18:53:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1578730
00:09:15.769 18:53:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:15.769 18:53:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:15.769 18:53:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1578730
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1578730 ']'
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:15.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable
00:09:15.769 18:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:15.769 [2024-06-10 18:53:30.459393] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:15.770 [2024-06-10 18:53:30.459458] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578730 ]
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:15.770 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:15.770 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:16.027 [2024-06-10 18:53:30.563770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:16.027 [2024-06-10 18:53:30.643898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.027 [2024-06-10 18:53:30.643985] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:09:16.027 [2024-06-10 18:53:30.644094] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:09:16.027 [2024-06-10 18:53:30.644095] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0
00:09:16.958 18:53:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:16.958 POWER: Env isn't set yet!
00:09:16.958 POWER: Attempting to initialise ACPI cpufreq power management...
00:09:16.958 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:16.958 POWER: Cannot set governor of lcore 0 to userspace
00:09:16.958 POWER: Attempting to initialise PSTAT power management...
00:09:16.958 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:09:16.958 POWER: Initialized successfully for lcore 0 power management
00:09:16.958 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:09:16.958 POWER: Initialized successfully for lcore 1 power management
00:09:16.958 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:09:16.958 POWER: Initialized successfully for lcore 2 power management
00:09:16.958 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:09:16.958 POWER: Initialized successfully for lcore 3 power management
00:09:16.958 [2024-06-10 18:53:31.425743] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:16.958 [2024-06-10 18:53:31.425758] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:16.958 [2024-06-10 18:53:31.425769] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.958 18:53:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:16.958 [2024-06-10 18:53:31.515506] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.958 18:53:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:16.958 18:53:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:16.958 ************************************
00:09:16.958 START TEST scheduler_create_thread
00:09:16.958 ************************************
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 2
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 3
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 4
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 5
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 6
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 7
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 8
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 9
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 10
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:16.959 18:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.855 18:53:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:18.855 18:53:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:18.855 18:53:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:18.855 18:53:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:18.855 18:53:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:19.419 18:53:34 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:09:19.419
00:09:19.419 real 0m2.620s
00:09:19.419 user 0m0.021s
00:09:19.419 sys 0m0.009s
00:09:19.419 18:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:19.419 18:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:19.419 ************************************
00:09:19.419 END TEST scheduler_create_thread
00:09:19.419 ************************************
00:09:19.676 18:53:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:19.676 18:53:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1578730
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1578730 ']'
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1578730
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@954 -- # uname
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1578730
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1578730'
00:09:19.676 killing process with pid 1578730
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1578730
00:09:19.676 18:53:34 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1578730
00:09:19.934 [2024-06-10 18:53:34.653912] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:09:20.192 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:09:20.192 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:09:20.192 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:09:20.192 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:09:20.192 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:09:20.192 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:09:20.192 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:09:20.192 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:09:20.192
00:09:20.192 real 0m4.546s
00:09:20.192 user 0m8.664s
00:09:20.192 sys 0m0.509s
00:09:20.193 18:53:34 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:20.193 18:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:20.193 ************************************
00:09:20.193 END TEST event_scheduler
00:09:20.193 ************************************
00:09:20.193 18:53:34 event -- event/event.sh@51 -- # modprobe -n nbd
00:09:20.193 18:53:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:20.193 18:53:34 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:09:20.193 18:53:34 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:20.193 18:53:34 event -- common/autotest_common.sh@10 -- # set +x
00:09:20.193 ************************************
00:09:20.193 START TEST app_repeat
00:09:20.193 ************************************
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1579577
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1579577'
00:09:20.193 Process app_repeat pid: 1579577
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:20.193 spdk_app_start Round 0
00:09:20.193 18:53:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1579577 /var/tmp/spdk-nbd.sock
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1579577 ']'
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:20.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable
00:09:20.193 18:53:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:20.451 [2024-06-10 18:53:34.972564] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:20.451 [2024-06-10 18:53:34.972631] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579577 ]
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:20.452 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:20.452 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:20.452 [2024-06-10 18:53:35.107061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:20.452 [2024-06-10 18:53:35.195225] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:09:20.452 [2024-06-10 18:53:35.195231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.388 18:53:35 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:09:21.388 18:53:35 event.app_repeat -- common/autotest_common.sh@863 -- # return 0
00:09:21.388 18:53:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:21.388 Malloc0
00:09:21.388 18:53:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:21.647 Malloc1
00:09:21.647 18:53:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:21.647 18:53:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:21.906 /dev/nbd0
00:09:21.906 18:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:21.906 18:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:21.906 1+0 records in
00:09:21.906 1+0 records out
00:09:21.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231138 s, 17.7 MB/s
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:09:21.906 18:53:36 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:09:21.906 18:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:21.906 18:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:21.906 18:53:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:22.164 /dev/nbd1
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:22.164 1+0 records in
00:09:22.164 1+0 records out
00:09:22.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269852 s, 15.2 MB/s
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:09:22.164 18:53:36 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:22.164 18:53:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:22.421 18:53:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:22.421 {
00:09:22.421 "nbd_device": "/dev/nbd0",
00:09:22.421 "bdev_name": "Malloc0"
00:09:22.421 },
00:09:22.421 {
00:09:22.421 "nbd_device": "/dev/nbd1",
00:09:22.421 "bdev_name": "Malloc1"
00:09:22.421 }
00:09:22.421 ]'
00:09:22.421 18:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:22.422 {
00:09:22.422 "nbd_device": "/dev/nbd0",
00:09:22.422 "bdev_name": "Malloc0"
00:09:22.422 },
00:09:22.422 {
00:09:22.422 "nbd_device": "/dev/nbd1",
00:09:22.422 "bdev_name": "Malloc1"
00:09:22.422 }
00:09:22.422 ]'
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:22.422 /dev/nbd1'
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:22.422 /dev/nbd1'
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:22.422 256+0 records in
00:09:22.422 256+0 records out
00:09:22.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114302 s, 91.7 MB/s
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:22.422 256+0 records in
00:09:22.422 256+0 records out
00:09:22.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167688 s, 62.5 MB/s
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:22.422 18:53:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:22.681 256+0 records in
00:09:22.681 256+0 records out
00:09:22.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287828 s, 36.4 MB/s
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:22.681 18:53:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:22.939 18:53:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:23.198 18:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
18:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:23.456 18:53:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:23.456 18:53:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:23.714 18:53:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:23.714 [2024-06-10 18:53:38.464240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.972 [2024-06-10 18:53:38.543708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.972 [2024-06-10 18:53:38.543714] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.972 [2024-06-10 18:53:38.588287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:23.972 [2024-06-10 18:53:38.588334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
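Each round's `nbd_dd_data_verify` pass, as traced above, writes 1 MiB of random data (256 x 4096-byte blocks) to every NBD device, then `cmp`s each device against the source file and removes the temp file. A rough stand-alone sketch of that write/verify flow, with throwaway temp files standing in for `/dev/nbd0` and `/dev/nbd1` (no NBD devices are assumed here, so `oflag=direct` is dropped; the function name is a stand-in, not the real helper):

```shell
# Sketch of the write/verify phases of nbd_dd_data_verify.
dd_data_verify() {
    local tmp_file dev
    tmp_file=$(mktemp)
    # write phase: 256 x 4096-byte blocks of random data to a source file,
    # then copy it onto each "device"
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for dev in "$@"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
    done
    # verify phase: every device must match the first 1M of the source,
    # analogous to `cmp -b -n 1M nbdrandtest /dev/nbdX` in the trace
    for dev in "$@"; do
        cmp -b -n 1M "$tmp_file" "$dev" || return 1
    done
    rm -f "$tmp_file"
}

fake0=$(mktemp); fake1=$(mktemp)   # stand-ins for /dev/nbd0 and /dev/nbd1
dd_data_verify "$fake0" "$fake1" && echo "verify ok"
rm -f "$fake0" "$fake1"
```

The real helper runs the same two-phase loop once per round of `app_repeat`, which is why the dd/cmp block repeats verbatim in each round of the log.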
00:09:26.502 18:53:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:26.502 18:53:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:26.502 spdk_app_start Round 1 00:09:26.502 18:53:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1579577 /var/tmp/spdk-nbd.sock 00:09:26.502 18:53:41 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1579577 ']' 00:09:26.502 18:53:41 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:26.502 18:53:41 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:26.502 18:53:41 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:26.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:26.502 18:53:41 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:26.502 18:53:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 18:53:41 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:26.759 18:53:41 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:09:26.760 18:53:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.017 Malloc0 00:09:27.017 18:53:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:27.276 Malloc1 00:09:27.276 18:53:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.276 18:53:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:27.544 /dev/nbd0 00:09:27.544 18:53:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:27.544 18:53:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@871 -- # 
grep -q -w nbd0 /proc/partitions 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:27.544 1+0 records in 00:09:27.544 1+0 records out 00:09:27.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261435 s, 15.7 MB/s 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:27.544 18:53:42 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:09:27.544 18:53:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.544 18:53:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.544 18:53:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:27.802 /dev/nbd1 00:09:27.802 18:53:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:27.802 18:53:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@870 -- 
# (( i = 1 )) 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:27.802 18:53:42 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:27.802 1+0 records in 00:09:27.802 1+0 records out 00:09:27.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235701 s, 17.4 MB/s 00:09:27.803 18:53:42 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:27.803 18:53:42 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:09:27.803 18:53:42 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:27.803 18:53:42 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:27.803 18:53:42 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:09:27.803 18:53:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.803 18:53:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.803 18:53:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:27.803 18:53:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.803 18:53:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:28.060 { 00:09:28.060 "nbd_device": "/dev/nbd0", 00:09:28.060 "bdev_name": "Malloc0" 00:09:28.060 }, 00:09:28.060 { 00:09:28.060 "nbd_device": "/dev/nbd1", 00:09:28.060 "bdev_name": "Malloc1" 00:09:28.060 } 00:09:28.060 ]' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:28.060 { 00:09:28.060 "nbd_device": "/dev/nbd0", 00:09:28.060 "bdev_name": "Malloc0" 00:09:28.060 }, 00:09:28.060 { 00:09:28.060 "nbd_device": "/dev/nbd1", 00:09:28.060 "bdev_name": "Malloc1" 00:09:28.060 } 00:09:28.060 ]' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:28.060 /dev/nbd1' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:28.060 /dev/nbd1' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:28.060 256+0 records in 00:09:28.060 256+0 records out 00:09:28.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114277 s, 91.8 MB/s 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:28.060 256+0 records in 00:09:28.060 256+0 records out 00:09:28.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276992 s, 37.9 MB/s 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.060 18:53:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:28.060 256+0 records in 00:09:28.060 256+0 records out 00:09:28.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184099 s, 57.0 MB/s 00:09:28.318 18:53:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:28.318 18:53:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b 
-n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.319 18:53:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:28.576 18:53:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:28.576 18:53:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:28.576 18:53:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:28.576 18:53:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.576 18:53:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.577 18:53:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:28.577 18:53:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:28.577 18:53:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.577 18:53:43 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.577 18:53:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:28.834 18:53:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.091 18:53:43 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:29.091 18:53:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:29.091 18:53:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:29.350 18:53:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:29.608 [2024-06-10 18:53:44.107298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.608 [2024-06-10 18:53:44.187362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.608 [2024-06-10 18:53:44.187366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.608 [2024-06-10 18:53:44.233022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:29.608 [2024-06-10 18:53:44.233070] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:32.138 18:53:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:32.138 18:53:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:32.138 spdk_app_start Round 2 00:09:32.138 18:53:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1579577 /var/tmp/spdk-nbd.sock 00:09:32.138 18:53:46 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1579577 ']' 00:09:32.138 18:53:46 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:32.138 18:53:46 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:32.138 18:53:46 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:32.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:32.138 18:53:46 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:32.138 18:53:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:32.396 18:53:47 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:32.396 18:53:47 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:09:32.396 18:53:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.654 Malloc0 00:09:32.654 18:53:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.912 Malloc1 00:09:32.912 18:53:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.912 18:53:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:33.200 /dev/nbd0 00:09:33.200 18:53:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:33.200 18:53:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.200 1+0 records in 00:09:33.200 1+0 records out 00:09:33.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260417 s, 15.7 MB/s 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:33.200 18:53:47 event.app_repeat 
-- common/autotest_common.sh@885 -- # size=4096 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:33.200 18:53:47 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:09:33.200 18:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.200 18:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.200 18:53:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:33.511 /dev/nbd1 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.511 1+0 records in 00:09:33.511 1+0 records out 00:09:33.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280574 s, 14.6 MB/s 00:09:33.511 
18:53:48 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:09:33.511 18:53:48 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.511 18:53:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:33.769 { 00:09:33.769 "nbd_device": "/dev/nbd0", 00:09:33.769 "bdev_name": "Malloc0" 00:09:33.769 }, 00:09:33.769 { 00:09:33.769 "nbd_device": "/dev/nbd1", 00:09:33.769 "bdev_name": "Malloc1" 00:09:33.769 } 00:09:33.769 ]' 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:33.769 { 00:09:33.769 "nbd_device": "/dev/nbd0", 00:09:33.769 "bdev_name": "Malloc0" 00:09:33.769 }, 00:09:33.769 { 00:09:33.769 "nbd_device": "/dev/nbd1", 00:09:33.769 "bdev_name": "Malloc1" 00:09:33.769 } 00:09:33.769 ]' 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:33.769 /dev/nbd1' 00:09:33.769 18:53:48 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:33.769 /dev/nbd1' 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:33.769 256+0 records in 00:09:33.769 256+0 records out 00:09:33.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110292 s, 95.1 MB/s 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:33.769 256+0 records in 00:09:33.769 256+0 records out 00:09:33.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02793 s, 37.5 MB/s 00:09:33.769 18:53:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.769 18:53:48 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:33.770 256+0 records in 00:09:33.770 256+0 records out 00:09:33.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286985 s, 36.5 MB/s 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.770 18:53:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.028 18:53:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:34.286 18:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:34.286 18:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:34.286 18:53:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:34.286 18:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.286 18:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.286 18:53:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:34.286 18:53:49 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:34.286 18:53:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.286 18:53:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.286 18:53:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.286 18:53:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:34.543 18:53:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:34.543 18:53:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:34.801 18:53:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:35.059 [2024-06-10 18:53:49.695101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.059 [2024-06-10 18:53:49.773716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.059 [2024-06-10 
18:53:49.773723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.318 [2024-06-10 18:53:49.818322] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:35.318 [2024-06-10 18:53:49.818365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.844 18:53:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1579577 /var/tmp/spdk-nbd.sock 00:09:37.845 18:53:52 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1579577 ']' 00:09:37.845 18:53:52 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.845 18:53:52 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:37.845 18:53:52 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
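The waitforlisten trace above reduces to a bounded polling loop: retry until the RPC socket shows up (up to max_retries), then proceed. A minimal standalone sketch of that pattern; the socket path, delays, and retry count below are illustrative stand-ins, not the test's real values:

```shell
# Bounded wait for a server artifact to appear (sketch of the waitforlisten
# idea; the real helper polls an RPC socket such as /var/tmp/spdk-nbd.sock).
sock=$(mktemp -u)                      # illustrative path, not SPDK's
( sleep 0.2; touch "$sock" ) &         # stand-in for the server creating its socket
max_retries=100
i=0
until [ -e "$sock" ]; do
  i=$((i + 1))
  if [ "$i" -ge "$max_retries" ]; then
    echo "timed out waiting for $sock" >&2
    exit 1
  fi
  sleep 0.1
done
echo "socket ready after $i retries"
rm -f "$sock"
```

The bounded retry count is what turns a hung server into a test failure instead of a stuck job.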
00:09:37.845 18:53:52 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:37.845 18:53:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:09:38.103 18:53:52 event.app_repeat -- event/event.sh@39 -- # killprocess 1579577 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1579577 ']' 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1579577 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1579577 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1579577' 00:09:38.103 killing process with pid 1579577 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1579577 00:09:38.103 18:53:52 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1579577 00:09:38.361 spdk_app_start is called in Round 0. 00:09:38.362 Shutdown signal received, stop current app iteration 00:09:38.362 Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 reinitialization... 00:09:38.362 spdk_app_start is called in Round 1. 00:09:38.362 Shutdown signal received, stop current app iteration 00:09:38.362 Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 reinitialization... 00:09:38.362 spdk_app_start is called in Round 2. 
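The killprocess sequence traced above probes the pid with kill -0, checks the process name with ps so a sudo wrapper is never signalled, then kills and reaps it. A standalone sketch of that sequence; the sleep child below is a stand-in for the SPDK app, not part of the real helper:

```shell
# Sketch of the killprocess pattern: probe the pid, refuse to signal a
# sudo wrapper, send SIGTERM, then reap the child.
sleep 30 &                             # stand-in for the target app
pid=$!
if kill -0 "$pid" 2>/dev/null; then    # is the pid alive?
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" != "sudo" ]; then       # never SIGTERM a sudo wrapper
    kill "$pid"
  fi
fi
wait "$pid" 2>/dev/null || true        # reap; ignore the signal exit status
echo "pid $pid stopped"
```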
00:09:38.362 Shutdown signal received, stop current app iteration 00:09:38.362 Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 reinitialization... 00:09:38.362 spdk_app_start is called in Round 3. 00:09:38.362 Shutdown signal received, stop current app iteration 00:09:38.362 18:53:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:38.362 18:53:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:38.362 00:09:38.362 real 0m17.999s 00:09:38.362 user 0m38.832s 00:09:38.362 sys 0m3.608s 00:09:38.362 18:53:52 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:38.362 18:53:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:38.362 ************************************ 00:09:38.362 END TEST app_repeat 00:09:38.362 ************************************ 00:09:38.362 18:53:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:38.362 00:09:38.362 real 0m27.025s 00:09:38.362 user 0m54.204s 00:09:38.362 sys 0m4.916s 00:09:38.362 18:53:52 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:38.362 18:53:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.362 ************************************ 00:09:38.362 END TEST event 00:09:38.362 ************************************ 00:09:38.362 18:53:53 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:09:38.362 18:53:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:38.362 18:53:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:38.362 18:53:53 -- common/autotest_common.sh@10 -- # set +x 00:09:38.362 ************************************ 00:09:38.362 START TEST thread 00:09:38.362 ************************************ 00:09:38.362 18:53:53 thread -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:09:38.620 * Looking for test storage... 
00:09:38.620 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:09:38.620 18:53:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:38.620 18:53:53 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:09:38.620 18:53:53 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:38.620 18:53:53 thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.620 ************************************ 00:09:38.620 START TEST thread_poller_perf 00:09:38.620 ************************************ 00:09:38.620 18:53:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:38.620 [2024-06-10 18:53:53.232353] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:38.620 [2024-06-10 18:53:53.232419] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582896 ] 00:09:38.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.620 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:38.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.620 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:38.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.620 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:38.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.620 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:38.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.620 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:38.620 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:09:38.620 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:38.620 [2024-06-10 18:53:53.365687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.878 [2024-06-10 18:53:53.450817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.878 Running
1000 pollers for 1 seconds with 1 microseconds period. 00:09:39.812 ====================================== 00:09:39.812 busy:2510580812 (cyc) 00:09:39.812 total_run_count: 290000 00:09:39.812 tsc_hz: 2500000000 (cyc) 00:09:39.812 ====================================== 00:09:39.812 poller_cost: 8657 (cyc), 3462 (nsec) 00:09:39.812 00:09:39.812 real 0m1.329s 00:09:39.812 user 0m1.185s 00:09:39.812 sys 0m0.138s 00:09:39.812 18:53:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:39.812 18:53:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.812 ************************************ 00:09:39.812 END TEST thread_poller_perf 00:09:39.812 ************************************ 00:09:40.070 18:53:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:40.070 18:53:54 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:09:40.070 18:53:54 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:40.070 18:53:54 thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.070 ************************************ 00:09:40.070 START TEST thread_poller_perf 00:09:40.070 ************************************ 00:09:40.070 18:53:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:40.070 [2024-06-10 18:53:54.644942] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:40.070 [2024-06-10 18:53:54.645002] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583111 ] 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:40.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.070 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:40.070 
[2024-06-10 18:53:54.778814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.329 [2024-06-10 18:53:54.862778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.329 Running 1000 pollers for 1 seconds with 0 microseconds period. 
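The poller_cost figures in these summaries follow directly from the printed counters: busy cycles divided by total_run_count gives cycles per poller call, and scaling by tsc_hz converts that to nanoseconds. A quick shell check against the first run's numbers in this log:

```shell
# Recompute poller_cost from the counters of the first poller_perf run
# above; the result should match the log's 8657 (cyc), 3462 (nsec).
busy=2510580812        # busy: total busy TSC cycles over the window
runs=290000            # total_run_count
tsc_hz=2500000000      # TSC frequency in Hz
cyc=$((busy / runs))
nsec=$((cyc * 1000000000 / tsc_hz))
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
```

The same arithmetic on the second run's counters (2502699732 busy cycles over 3823000 runs) reproduces its 654 (cyc), 261 (nsec) line.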
00:09:41.265 ====================================== 00:09:41.265 busy:2502699732 (cyc) 00:09:41.265 total_run_count: 3823000 00:09:41.265 tsc_hz: 2500000000 (cyc) 00:09:41.265 ====================================== 00:09:41.265 poller_cost: 654 (cyc), 261 (nsec) 00:09:41.265 00:09:41.265 real 0m1.323s 00:09:41.265 user 0m1.186s 00:09:41.265 sys 0m0.131s 00:09:41.265 18:53:55 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.265 18:53:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:41.265 ************************************ 00:09:41.265 END TEST thread_poller_perf 00:09:41.265 ************************************ 00:09:41.265 18:53:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:41.265 00:09:41.265 real 0m2.926s 00:09:41.265 user 0m2.464s 00:09:41.265 sys 0m0.475s 00:09:41.265 18:53:55 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.265 18:53:55 thread -- common/autotest_common.sh@10 -- # set +x 00:09:41.265 ************************************ 00:09:41.265 END TEST thread 00:09:41.265 ************************************ 00:09:41.523 18:53:56 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:09:41.523 18:53:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:41.523 18:53:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:41.523 18:53:56 -- common/autotest_common.sh@10 -- # set +x 00:09:41.523 ************************************ 00:09:41.523 START TEST accel 00:09:41.523 ************************************ 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:09:41.523 * Looking for test storage... 
00:09:41.523 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:09:41.523 18:53:56 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:41.523 18:53:56 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:41.523 18:53:56 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:41.523 18:53:56 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1583425 00:09:41.523 18:53:56 accel -- accel/accel.sh@63 -- # waitforlisten 1583425 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@830 -- # '[' -z 1583425 ']' 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:41.523 18:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:09:41.523 18:53:56 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:41.523 18:53:56 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:41.523 18:53:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:41.523 18:53:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:41.523 18:53:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.523 18:53:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.523 18:53:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:41.523 18:53:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:41.523 18:53:56 accel -- accel/accel.sh@41 -- # jq -r . 00:09:41.523 [2024-06-10 18:53:56.229095] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:41.524 [2024-06-10 18:53:56.229158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583425 ] 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:41.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.782 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:41.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.783 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:41.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.783 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:41.783 
[2024-06-10 18:53:56.362742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.783 [2024-06-10 18:53:56.448435] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.349 18:53:57 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:42.349 18:53:57 accel -- common/autotest_common.sh@863 -- # return 0 00:09:42.349 18:53:57 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:42.349 18:53:57 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:42.349 18:53:57 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:42.349 18:53:57 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:42.349 18:53:57 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:42.349 18:53:57 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:42.349 18:53:57 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:42.349 18:53:57 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.349 18:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:42.349 18:53:57 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.349 18:53:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:42.349 18:53:57 accel -- accel/accel.sh@72 -- # IFS== 00:09:42.349 18:53:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:42.349 18:53:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:42.608 18:53:57 accel -- accel/accel.sh@75 -- # killprocess 1583425 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@949 -- # '[' -z 1583425 ']' 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@953 -- # kill -0 1583425 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@954 -- # uname 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1583425 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:42.608 18:53:57 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1583425' 00:09:42.609 killing process with pid 1583425 00:09:42.609 18:53:57 accel -- common/autotest_common.sh@968 -- # kill 1583425 00:09:42.609 18:53:57 accel -- common/autotest_common.sh@973 -- # wait 1583425 00:09:42.866 18:53:57 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:42.866 18:53:57 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:42.866 
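The opcode-assignment parsing traced above (accel.sh@70-73) can be reproduced in isolation. A minimal sketch follows, assuming the RPC returns JSON of the form `{"copy": "software", ...}`; the `rpc_stub` function and its literal payload are hypothetical stand-ins for the harness's real `$rpc_py accel_get_opc_assignments` call.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for "$rpc_py accel_get_opc_assignments"
rpc_stub() { printf '%s\n' '{"copy": "software", "fill": "software"}'; }

declare -A expected_opcs

# Flatten the JSON object into "key=value" words, as accel.sh@70 does
exp_opcs=($(rpc_stub | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))

for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"   # split "copy=software" on '='
    expected_opcs["$opc"]=$module
done
```

The `IFS==` trace lines in the log are this same idiom: an `IFS` assignment scoped to the `read` builtin, so each `key=value` word splits cleanly into the `opc` and `module` variables.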
18:53:57 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:42.866 18:53:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:42.866 18:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:42.866 18:53:57 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:42.866 18:53:57 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:09:42.866 18:53:57 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:42.866 18:53:57 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:42.866 18:53:57 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:42.866 18:53:57 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:42.866 18:53:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:42.866 18:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:43.125 ************************************ 00:09:43.125 START TEST accel_missing_filename 00:09:43.125 ************************************ 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:43.125 18:53:57 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:43.125 18:53:57 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:43.125 18:53:57 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:43.125 [2024-06-10 18:53:57.686819] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:43.125 [2024-06-10 18:53:57.686875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583701 ] 00:09:43.125 [2024-06-10 18:53:57.821169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.383 [2024-06-10 18:53:57.906095] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.383 [2024-06-10 18:53:57.969903] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.383 [2024-06-10 18:53:58.032946] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:09:43.383 A filename is required. 
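The "A filename is required." failure above is exactly what this negative test wants: `run_test accel_missing_filename NOT accel_perf -t 1 -w compress` passes only if `accel_perf` exits non-zero. A minimal sketch of a `NOT`-style wrapper illustrates the pattern; this is an illustration of the idiom, not SPDK's actual `NOT()` from autotest_common.sh.

```shell
#!/usr/bin/env bash
# Invert a command's exit status: the negative test passes iff the command fails.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded -> test fails
    fi
    return 0            # command failed, as the negative test expects
}

# Usage sketch with a stand-in failing command:
NOT false && echo "negative test passed"
```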
00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:43.383 00:09:43.383 real 0m0.459s 00:09:43.383 user 0m0.295s 00:09:43.383 sys 0m0.192s 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:43.383 18:53:58 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:43.383 ************************************ 00:09:43.383 END TEST accel_missing_filename 00:09:43.383 ************************************ 00:09:43.641 18:53:58 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:43.641 18:53:58 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:09:43.641 18:53:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:43.641 18:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:43.641 ************************************ 00:09:43.641 START TEST accel_compress_verify 00:09:43.641 ************************************ 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg 
accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:43.641 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:43.641 18:53:58 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:43.641 18:53:58 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:43.641 18:53:58 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:43.642 18:53:58 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:43.642 18:53:58 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:43.642 18:53:58 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:43.642 18:53:58 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:43.642 18:53:58 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:43.642 18:53:58 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:43.642 [2024-06-10 18:53:58.218667] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:43.642 [2024-06-10 18:53:58.218719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583959 ] 00:09:43.642 [2024-06-10 18:53:58.350025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.900 [2024-06-10 18:53:58.435058] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.900 [2024-06-10 18:53:58.502561] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.900 [2024-06-10 18:53:58.568700] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:09:43.900 00:09:43.900 Compression does not support the verify option, aborting. 
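The `es=...` traces around these expected failures show how the harness normalizes exit statuses (es=234 becomes 106 then 1; es=161 becomes 33 then 1): a status above 128 conventionally means the process died on a signal, so 128 is subtracted before collapsing to a pass/fail bit. The sketch below mirrors that observed behavior; it is not the exact autotest_common.sh code.

```shell
#!/usr/bin/env bash
# Normalize an exit status the way the traces above do.
normalize_es() {
    local es=$1
    if (( es > 128 )); then
        es=$((es - 128))    # strip the "killed by signal" offset
    fi
    (( es != 0 )) && es=1   # any remaining failure collapses to 1
    echo "$es"
}
```

For example, `normalize_es 234` yields 1, matching the es=234 -> 106 -> 1 chain traced in accel_missing_filename.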
00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:43.900 00:09:43.900 real 0m0.462s 00:09:43.900 user 0m0.295s 00:09:43.900 sys 0m0.191s 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:43.900 18:53:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:43.900 ************************************ 00:09:43.900 END TEST accel_compress_verify 00:09:43.900 ************************************ 00:09:44.158 18:53:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:44.158 18:53:58 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:44.158 18:53:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:44.158 18:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.158 ************************************ 00:09:44.158 START TEST accel_wrong_workload 00:09:44.158 ************************************ 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:44.158 18:53:58 
accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.158 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:44.158 18:53:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:09:44.158 Unsupported workload type: foobar 00:09:44.158 [2024-06-10 18:53:58.755053] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:44.158 accel_perf options: 00:09:44.158 [-h help message] 00:09:44.158 [-q queue depth per core] 00:09:44.158 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:44.158 [-T number of threads per core 00:09:44.158 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:09:44.158 [-t time in seconds] 00:09:44.158 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:44.158 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:44.158 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:44.158 [-l for compress/decompress workloads, name of uncompressed input file 00:09:44.158 [-S for crc32c workload, use this seed value (default 0) 00:09:44.158 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:44.158 [-f for fill workload, use this BYTE value (default 255) 00:09:44.158 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:44.158 [-y verify result if this switch is on] 00:09:44.158 [-a tasks to allocate per core (default: same value as -q)] 00:09:44.159 Can be used to spread operations across a wider range of memory. 
00:09:44.159 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:09:44.159 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:44.159 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:44.159 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:44.159 00:09:44.159 real 0m0.039s 00:09:44.159 user 0m0.021s 00:09:44.159 sys 0m0.018s 00:09:44.159 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:44.159 18:53:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:44.159 ************************************ 00:09:44.159 END TEST accel_wrong_workload 00:09:44.159 ************************************ 00:09:44.159 Error: writing output failed: Broken pipe 00:09:44.159 18:53:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:44.159 18:53:58 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:09:44.159 18:53:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:44.159 18:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.159 ************************************ 00:09:44.159 START TEST accel_negative_buffers 00:09:44.159 ************************************ 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.159 18:53:58 
accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:44.159 18:53:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:44.159 -x option must be non-negative. 00:09:44.159 [2024-06-10 18:53:58.865354] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:44.159 accel_perf options: 00:09:44.159 [-h help message] 00:09:44.159 [-q queue depth per core] 00:09:44.159 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:44.159 [-T number of threads per core 00:09:44.159 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:09:44.159 [-t time in seconds] 00:09:44.159 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:44.159 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:44.159 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:44.159 [-l for compress/decompress workloads, name of uncompressed input file 00:09:44.159 [-S for crc32c workload, use this seed value (default 0) 00:09:44.159 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:44.159 [-f for fill workload, use this BYTE value (default 255) 00:09:44.159 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:44.159 [-y verify result if this switch is on] 00:09:44.159 [-a tasks to allocate per core (default: same value as -q)] 00:09:44.159 Can be used to spread operations across a wider range of memory. 
00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:44.159 00:09:44.159 real 0m0.039s 00:09:44.159 user 0m0.025s 00:09:44.159 sys 0m0.013s 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:44.159 18:53:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:44.159 ************************************ 00:09:44.159 END TEST accel_negative_buffers 00:09:44.159 ************************************ 00:09:44.159 Error: writing output failed: Broken pipe 00:09:44.159 18:53:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:44.159 18:53:58 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:09:44.159 18:53:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:44.159 18:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.418 ************************************ 00:09:44.418 START TEST accel_crc32c 00:09:44.418 ************************************ 00:09:44.418 18:53:58 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:44.418 18:53:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:44.418 [2024-06-10 18:53:58.972910] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:44.418 [2024-06-10 18:53:58.972963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584029 ] 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.5 cannot be 
used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:44.418 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:44.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.418 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:44.418 [2024-06-10 18:53:59.105378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.678 [2024-06-10 18:53:59.189683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.678 18:53:59 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:44.678 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:44.679 18:53:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:46.055 18:54:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:46.055 18:54:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:46.055 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:46.056 18:54:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:46.056 00:09:46.056 real 0m1.458s 00:09:46.056 user 0m0.006s 00:09:46.056 sys 0m0.002s 00:09:46.056 18:54:00 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:46.056 18:54:00 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:46.056 ************************************ 00:09:46.056 END TEST accel_crc32c 00:09:46.056 ************************************ 00:09:46.056 18:54:00 accel -- 
accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:46.056 18:54:00 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:09:46.056 18:54:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:46.056 18:54:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:46.056 ************************************ 00:09:46.056 START TEST accel_crc32c_C2 00:09:46.056 ************************************ 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:46.056 [2024-06-10 18:54:00.502551] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:46.056 [2024-06-10 18:54:00.502615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584347 ] 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:46.056 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:46.056 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:46.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.056 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:46.056 [2024-06-10 18:54:00.636603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.056 [2024-06-10 18:54:00.723245] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.056 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:46.057 18:54:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:01 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:47.431 00:09:47.431 real 0m1.458s 00:09:47.431 user 0m0.008s 00:09:47.431 sys 0m0.001s 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:47.431 18:54:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:47.431 ************************************ 00:09:47.431 END TEST accel_crc32c_C2 00:09:47.431 ************************************ 00:09:47.431 18:54:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:47.431 18:54:01 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:47.431 18:54:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:47.431 18:54:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:47.431 ************************************ 00:09:47.431 START TEST accel_copy 00:09:47.431 ************************************ 00:09:47.431 18:54:02 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:09:47.431 
18:54:02 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:47.431 18:54:02 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:47.431 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.431 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.431 18:54:02 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:47.431 18:54:02 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:47.431 18:54:02 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:47.432 18:54:02 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:47.432 [2024-06-10 18:54:02.036013] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:47.432 [2024-06-10 18:54:02.036068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584725 ] 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:47.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:47.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:47.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:47.432 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:47.432 [2024-06-10 18:54:02.166132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.690 [2024-06-10 18:54:02.250355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.690 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:47.691 18:54:02 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:09:47.691 18:54:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:49.062 18:54:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:49.062 00:09:49.062 real 0m1.452s 00:09:49.062 user 0m0.007s 00:09:49.062 sys 0m0.001s 00:09:49.062 18:54:03 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:49.062 18:54:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:49.062 ************************************ 00:09:49.062 END TEST accel_copy 00:09:49.062 ************************************ 00:09:49.062 18:54:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.062 18:54:03 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:49.062 18:54:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:49.062 18:54:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:49.062 ************************************ 00:09:49.062 START TEST accel_fill 00:09:49.062 ************************************ 00:09:49.062 18:54:03 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:49.062 18:54:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:49.062 [2024-06-10 18:54:03.569125] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:49.062 [2024-06-10 18:54:03.569181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585006 ]
00:09:49.062 [2024-06-10 18:54:03.704500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.062 [2024-06-10 18:54:03.788492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.320
18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 
18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.320 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.321 
18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:49.321 18:54:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:50.252 18:54:04 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:50.252 18:54:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:50.252 00:09:50.252 real 0m1.462s 00:09:50.252 user 0m0.007s 00:09:50.252 sys 0m0.003s 00:09:50.252 18:54:04 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:50.252 18:54:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:50.252 ************************************ 00:09:50.252 END TEST accel_fill 00:09:50.252 ************************************ 00:09:50.509 18:54:05 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:50.509 18:54:05 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:50.509 18:54:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:50.509 18:54:05 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.509 ************************************ 00:09:50.509 START TEST accel_copy_crc32c 00:09:50.509 ************************************ 00:09:50.509 18:54:05 accel.accel_copy_crc32c 
-- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:09:50.509 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:50.509 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:50.509 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.509 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:50.510 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:50.510 [2024-06-10 18:54:05.103317] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:50.510 [2024-06-10 18:54:05.103375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585602 ]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:50.510 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:50.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:50.510 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:50.510 [2024-06-10 18:54:05.234966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.769 [2024-06-10 18:54:05.319327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:50.769 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.770 
18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:50.770 18:54:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:52.144 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:52.145
00:09:52.145 real	0m1.461s
00:09:52.145 user	0m0.007s
00:09:52.145 sys	0m0.002s
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:52.145 18:54:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:09:52.145 ************************************
00:09:52.145 END TEST accel_copy_crc32c
00:09:52.145 ************************************
00:09:52.145 18:54:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:09:52.145 18:54:06 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']'
00:09:52.145 18:54:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:52.145 18:54:06 accel -- common/autotest_common.sh@10 -- # set +x
00:09:52.145 ************************************
00:09:52.145 START TEST accel_copy_crc32c_C2
00:09:52.145 ************************************
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:09:52.145 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:09:52.145 [2024-06-10 18:54:06.633358] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:52.145 [2024-06-10 18:54:06.633412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586007 ]
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:52.145 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:52.145 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:52.145 [2024-06-10 18:54:06.767959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.403 [2024-06-10 18:54:06.851563] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.403 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:52.404 18:54:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:53.336
00:09:53.336 real	0m1.450s
00:09:53.336 user	0m0.008s
00:09:53.336 sys	0m0.001s
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:53.336 18:54:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:09:53.336 ************************************
00:09:53.336 END TEST accel_copy_crc32c_C2
00:09:53.336 ************************************
00:09:53.336 18:54:08 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:09:53.336 18:54:08 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']'
00:09:53.336 18:54:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:53.336 18:54:08 accel -- common/autotest_common.sh@10 -- # set +x
00:09:53.594 ************************************
00:09:53.594 START TEST accel_dualcast
00:09:53.594 ************************************
00:09:53.594 18:54:08 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
18:54:08 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:09:53.594 18:54:08 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:09:53.594 [2024-06-10 18:54:08.159766] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:09:53.594 [2024-06-10 18:54:08.159820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586298 ]
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.0 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.1 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.2 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.3 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.4 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.5 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.6 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:01.7 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.0 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.1 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.2 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.3 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.4 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.5 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.6 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b6:02.7 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.0 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.1 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.2 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.3 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.4 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.5 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.6 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:01.7 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:02.0 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:02.1 cannot be used
00:09:53.594 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.594 EAL: Requested device 0000:b8:02.2 cannot be used
00:09:53.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.595 EAL: Requested device 0000:b8:02.3 cannot be used
00:09:53.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.595 EAL: Requested device 0000:b8:02.4 cannot be used
00:09:53.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.595 EAL: Requested device 0000:b8:02.5 cannot be used
00:09:53.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.595 EAL: Requested device 0000:b8:02.6 cannot be used
00:09:53.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:53.595 EAL: Requested device 0000:b8:02.7 cannot be used
00:09:53.595 [2024-06-10 18:54:08.294689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:53.853 [2024-06-10 18:54:08.379372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 
-- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:53.853 18:54:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 
accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:55.229 18:54:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:55.229 00:09:55.229 real 0m1.462s 00:09:55.229 user 0m0.006s 00:09:55.229 sys 0m0.003s 00:09:55.229 18:54:09 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:55.229 18:54:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:55.229 ************************************ 00:09:55.229 END TEST accel_dualcast 00:09:55.229 ************************************ 00:09:55.229 18:54:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:55.229 18:54:09 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:55.229 18:54:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:55.229 18:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:09:55.229 ************************************ 00:09:55.229 START TEST accel_compare 00:09:55.229 ************************************ 00:09:55.229 18:54:09 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:55.229 
18:54:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:55.229 18:54:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:55.229 [2024-06-10 18:54:09.694562] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:55.229 [2024-06-10 18:54:09.694627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586578 ] 00:09:55.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:55.229 EAL: Requested device 0000:b6:01.0 cannot be used [... previous two messages repeated for each remaining QAT device, 0000:b6:01.1 through 0000:b8:02.7 ...] 00:09:55.230 [2024-06-10 18:54:09.825422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.230 [2024-06-10 18:54:09.911665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:55.230 18:54:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:56.658 18:54:11 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:56.658 18:54:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:56.658 00:09:56.658 real 0m1.458s 00:09:56.658 user 0m0.008s 00:09:56.658 sys 0m0.000s 00:09:56.658 18:54:11 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:56.658 18:54:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:56.658 ************************************ 00:09:56.658 END TEST accel_compare 00:09:56.658 ************************************ 00:09:56.658 18:54:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:56.658 18:54:11 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:56.658 18:54:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:56.658 18:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:09:56.658 ************************************ 00:09:56.658 START TEST accel_xor 00:09:56.658 ************************************ 00:09:56.658 18:54:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf 
-c /dev/fd/62 -t 1 -w xor -y 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:56.658 18:54:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:56.658 [2024-06-10 18:54:11.223944] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:56.658 [2024-06-10 18:54:11.223997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586868 ] 00:09:56.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.658 EAL: Requested device 0000:b6:01.0 cannot be used [... previous two messages repeated for each remaining QAT device, 0000:b6:01.1 through 0000:b8:02.7 ...] 00:09:56.659 [2024-06-10 18:54:11.353905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.917 [2024-06-10 18:54:11.438063] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:56.917 18:54:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:12 accel.accel_xor -- 
accel/accel.sh@20 -- # val=
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:58.291
00:09:58.291 real 0m1.450s
00:09:58.291 user 0m0.008s
00:09:58.291 sys 0m0.000s
00:09:58.291 18:54:12 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:58.291 18:54:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:09:58.291 ************************************
00:09:58.291 END TEST accel_xor
00:09:58.291 ************************************
00:09:58.291 18:54:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:09:58.291 18:54:12 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']'
00:09:58.291 18:54:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:58.291 18:54:12 accel -- common/autotest_common.sh@10 -- # set +x
00:09:58.291 ************************************
00:09:58.291 START TEST accel_xor
00:09:58.291 ************************************
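For context, the accel_xor cases in this log drive `accel_perf -w xor` over 4096-byte buffers with two (`-x 2` implied by `val=2`) and three (`-x 3`) source buffers on the software module. A minimal Python sketch of the byte-wise operation that workload exercises (buffer count and size taken from the log; the helper name is illustrative, not an SPDK API):

```python
def xor_buffers(sources):
    """XOR any number of equal-length byte buffers into one destination.

    Illustrative stand-in for the software xor path accel_perf measures;
    the function name is hypothetical."""
    n = len(sources[0])
    assert all(len(s) == n for s in sources), "buffers must be equal length"
    dst = bytearray(n)
    for src in sources:
        for i, b in enumerate(src):
            dst[i] ^= b
    return bytes(dst)

# Three 4096-byte sources, mirroring the '-t 1 -w xor -y -x 3' run above.
srcs = [bytes([v] * 4096) for v in (1, 2, 3)]
out = xor_buffers(srcs)  # 1 ^ 2 ^ 3 == 0, so every output byte is 0
```

The real accel framework dispatches the same operation to a hardware offload module when one is available; here the log shows it falling back to `software` after the QAT devices could not be claimed.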
18:54:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:58.291 18:54:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:58.291 [2024-06-10 18:54:12.756818] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:09:58.291 [2024-06-10 18:54:12.756874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587153 ] 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:58.291 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.4 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:58.291 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:58.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.291 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:58.291 [2024-06-10 18:54:12.889769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.291 [2024-06-10 18:54:12.973898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.291 18:54:13 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 
18:54:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:58.292 18:54:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:59.666 18:54:14 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:09:59.666 18:54:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:59.666
00:09:59.666 real 0m1.459s
00:09:59.666 user 0m0.008s
00:09:59.666 sys 0m0.002s
00:09:59.666 18:54:14 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:59.666 18:54:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:09:59.666 ************************************
00:09:59.666 END TEST accel_xor
00:09:59.666 ************************************
00:09:59.666 18:54:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:09:59.666 18:54:14 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:09:59.666 18:54:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:59.666 18:54:14 accel -- common/autotest_common.sh@10 -- # set +x
00:09:59.666 ************************************
00:09:59.666 START TEST accel_dif_verify
00:09:59.666 ************************************
00:09:59.666 18:54:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify
00:09:59.666 18:54:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:09:59.666 18:54:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:09:59.666 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:09:59.666 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:09:59.666 18:54:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:09:59.666 18:54:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w
dif_verify 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:59.667 18:54:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:59.667 [2024-06-10 18:54:14.286845] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:09:59.667 [2024-06-10 18:54:14.286896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587441 ] 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.0 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.1 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.2 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.3 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.4 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.5 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.6 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:01.7 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.0 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.1 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.2 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.3 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.4 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.5 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.6 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b6:02.7 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.0 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.1 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.2 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.3 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 
EAL: Requested device 0000:b8:01.4 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.5 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.6 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:01.7 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.0 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.1 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.2 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.3 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.4 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.5 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.6 cannot be used 00:09:59.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:59.667 EAL: Requested device 0000:b8:02.7 cannot be used 00:09:59.667 [2024-06-10 18:54:14.418367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.925 [2024-06-10 18:54:14.501790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:59.925 18:54:14 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:59.925 18:54:14 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:59.925 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:59.926 18:54:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@20 
-- # val= 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:01.298 18:54:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:01.298 00:10:01.298 real 0m1.454s 00:10:01.298 user 0m0.008s 00:10:01.298 sys 0m0.001s 00:10:01.298 18:54:15 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:01.298 18:54:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:01.298 ************************************ 00:10:01.298 END TEST accel_dif_verify 00:10:01.298 ************************************ 00:10:01.298 18:54:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:01.298 18:54:15 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:10:01.298 18:54:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:01.298 18:54:15 accel -- common/autotest_common.sh@10 -- # set +x 00:10:01.298 ************************************ 00:10:01.298 START TEST accel_dif_generate 00:10:01.298 ************************************ 00:10:01.298 18:54:15 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w dif_generate 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:01.298 18:54:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:01.298 [2024-06-10 18:54:15.813715] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:01.298 [2024-06-10 18:54:15.813767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587720 ] 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:01.298 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.298 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:01.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:01.299 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:01.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.299 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:01.299 [2024-06-10 18:54:15.944974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.299 [2024-06-10 18:54:16.028893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:01.556 18:54:16 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:10:01.556 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:01.557 18:54:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:02.488 18:54:17 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:02.488 18:54:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:10:02.488 00:10:02.488 real 0m1.454s 00:10:02.488 user 0m0.006s 00:10:02.488 sys 0m0.003s 00:10:02.488 18:54:17 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:02.488 18:54:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:02.488 ************************************ 00:10:02.488 END TEST accel_dif_generate 00:10:02.488 ************************************ 00:10:02.746 18:54:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:02.746 18:54:17 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:10:02.746 18:54:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:02.746 18:54:17 accel -- common/autotest_common.sh@10 -- # set +x 00:10:02.746 ************************************ 00:10:02.746 START TEST accel_dif_generate_copy 00:10:02.746 ************************************ 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:02.746 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:02.746 [2024-06-10 18:54:17.342348] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:02.746 [2024-06-10 18:54:17.342401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588013 ] 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:01.7 
cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.5 cannot be used 
00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:02.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.746 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:02.746 [2024-06-10 18:54:17.473217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.004 [2024-06-10 18:54:17.556798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:03.004 18:54:17 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:03.004 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:03.005 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:03.005 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:03.005 18:54:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.377 00:10:04.377 real 0m1.454s 00:10:04.377 user 0m0.009s 00:10:04.377 sys 0m0.000s 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:04.377 18:54:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:04.377 ************************************ 00:10:04.377 END TEST accel_dif_generate_copy 00:10:04.377 ************************************ 00:10:04.377 18:54:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:04.377 18:54:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:04.377 18:54:18 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:10:04.377 18:54:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:04.377 18:54:18 accel -- common/autotest_common.sh@10 -- # set +x 00:10:04.377 ************************************ 00:10:04.377 START TEST accel_comp 00:10:04.377 ************************************ 00:10:04.377 18:54:18 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:04.377 18:54:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:04.377 [2024-06-10 18:54:18.868854] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:04.377 [2024-06-10 18:54:18.868905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588295 ] 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.377 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:04.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:04.378 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:04.378 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:04.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.378 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:04.378 [2024-06-10 18:54:18.999741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.378 [2024-06-10 18:54:19.083290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.634 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- 
accel/accel.sh@20 -- # val=0x1 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:04.635 18:54:19 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- 
# case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.635 18:54:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 
18:54:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:05.566 18:54:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:05.566 00:10:05.566 real 0m1.450s 00:10:05.566 user 0m0.008s 00:10:05.566 sys 0m0.001s 00:10:05.566 18:54:20 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:05.566 18:54:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:05.566 ************************************ 00:10:05.566 END TEST accel_comp 00:10:05.566 ************************************ 00:10:05.825 18:54:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:05.825 18:54:20 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:10:05.825 18:54:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:05.825 18:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:10:05.825 ************************************ 00:10:05.825 START TEST accel_decomp 00:10:05.825 ************************************ 00:10:05.825 18:54:20 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 
00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:05.825 18:54:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:05.825 [2024-06-10 18:54:20.390664] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:05.825 [2024-06-10 18:54:20.390717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588582 ] 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:05.825 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:05.825 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:05.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.825 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:05.825 [2024-06-10 18:54:20.522629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.084 [2024-06-10 18:54:20.606759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 
accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- 
accel/accel.sh@20 -- # val=software 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:06.084 18:54:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:07.455 18:54:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:07.455
00:10:07.455 real 0m1.455s
00:10:07.455 user 0m0.008s
00:10:07.455 sys 0m0.001s
00:10:07.455 18:54:21 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:10:07.455 18:54:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:10:07.455 ************************************
00:10:07.455 END TEST accel_decomp
00:10:07.455 ************************************
00:10:07.455 18:54:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:07.455 18:54:21 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:10:07.455 18:54:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:10:07.455 18:54:21 accel -- common/autotest_common.sh@10 -- # set +x
00:10:07.455 ************************************
00:10:07.455 START TEST accel_decomp_full
00:10:07.455 ************************************
00:10:07.455 18:54:21 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:10:07.455 18:54:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:10:07.456 18:54:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
[2024-06-10 18:54:21.917397] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:10:07.456 [2024-06-10 18:54:21.917450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588863 ]
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.0 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.1 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.2 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.3 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.4 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.5 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.6 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:01.7 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.0 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.1 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.2 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.3 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.4 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.5 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.6 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b6:02.7 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.0 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.1 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.2 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.3 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.4 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.5 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.6 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:01.7 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.0 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.1 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.2 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.3 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.4 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.5 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.6 cannot be used
00:10:07.456 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:07.456 EAL: Requested device 0000:b8:02.7 cannot be used
00:10:07.456 [2024-06-10 18:54:22.050743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:07.456 [2024-06-10 18:54:22.136000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:10:07.456 18:54:22 accel.accel_decomp_full --
accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.456 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:07.457 18:54:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:08.828 18:54:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.828 00:10:08.828 real 0m1.468s 00:10:08.828 user 0m0.008s 00:10:08.828 sys 0m0.001s 00:10:08.828 18:54:23 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:08.828 18:54:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:08.828 ************************************ 00:10:08.828 END TEST accel_decomp_full 00:10:08.828 
************************************
00:10:08.828 18:54:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.828 18:54:23 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:10:08.828 18:54:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:10:08.828 18:54:23 accel -- common/autotest_common.sh@10 -- # set +x
00:10:08.828 ************************************
00:10:08.828 START TEST accel_decomp_mcore
00:10:08.829 ************************************
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:10:08.829 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
[2024-06-10 18:54:23.463101] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:10:08.829 [2024-06-10 18:54:23.463155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589154 ]
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.0 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.1 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.2 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.3 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.4 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.5 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.6 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:01.7 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.0 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.1 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.2 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.3 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.4 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.5 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.6 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b6:02.7 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.0 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.1 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.2 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.3 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.4 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.5 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.6 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:01.7 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.0 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.1 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.2 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.3 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.4 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.5 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.6 cannot be used
00:10:08.829 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.829 EAL: Requested device 0000:b8:02.7 cannot be used
00:10:09.087 [2024-06-10 18:54:23.594916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:09.087 [2024-06-10 18:54:23.682451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:10:09.087 [2024-06-10 18:54:23.682545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:10:09.087 [2024-06-10 18:54:23.682661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:10:09.087 [2024-06-10 18:54:23.682662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:09.087 18:54:23
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # 
accel_opc=decompress 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:09.087 18:54:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:09.087 18:54:23 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 
18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.461 00:10:10.461 real 0m1.468s 00:10:10.461 user 0m4.657s 00:10:10.461 sys 0m0.193s 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:10.461 18:54:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 ************************************ 00:10:10.461 END TEST accel_decomp_mcore 00:10:10.461 ************************************ 00:10:10.461 18:54:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 
00:10:10.461 18:54:24 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:10.461 18:54:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:10.461 18:54:24 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 ************************************ 00:10:10.461 START TEST accel_decomp_full_mcore 00:10:10.461 ************************************ 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.461 18:54:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:10.461 18:54:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:10.461 [2024-06-10 18:54:25.011429] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:10.461 [2024-06-10 18:54:25.011487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589436 ] 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.461 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:10.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:10.462 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:10.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.462 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:10.462 [2024-06-10 18:54:25.145222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.720 [2024-06-10 18:54:25.233607] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.720 [2024-06-10 18:54:25.233700] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.720 [2024-06-10 18:54:25.233813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.720 [2024-06-10 18:54:25.233814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # 
accel_opc=decompress 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.720 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:10.721 18:54:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:12.095 00:10:12.095 real 0m1.493s 00:10:12.095 user 0m4.725s 00:10:12.095 sys 0m0.206s 00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:10:12.095 18:54:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:12.095 ************************************ 00:10:12.095 END TEST accel_decomp_full_mcore 00:10:12.095 ************************************ 00:10:12.095 18:54:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:12.095 18:54:26 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:10:12.095 18:54:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:12.095 18:54:26 accel -- common/autotest_common.sh@10 -- # set +x 00:10:12.095 ************************************ 00:10:12.095 START TEST accel_decomp_mthread 00:10:12.095 ************************************ 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:12.095 
18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:12.095 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:12.095 [2024-06-10 18:54:26.585518] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:12.095 [2024-06-10 18:54:26.585574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589729 ] 00:10:12.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.095 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:12.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.095 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:12.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.095 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:12.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.095 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:12.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested 
device 0000:b6:01.7 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.5 
cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:12.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.096 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:12.096 [2024-06-10 18:54:26.718212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.096 [2024-06-10 18:54:26.801776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:12.354 18:54:26 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.354 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:10:12.355 18:54:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:13.288 00:10:13.288 real 0m1.477s 00:10:13.288 user 0m1.286s 00:10:13.288 sys 0m0.191s 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:13.288 18:54:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:13.288 ************************************ 00:10:13.288 END TEST accel_decomp_mthread 00:10:13.288 ************************************ 00:10:13.546 18:54:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:13.546 18:54:28 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:13.546 18:54:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:13.546 18:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 ************************************ 00:10:13.546 START TEST accel_decomp_full_mthread 00:10:13.546 ************************************ 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:13.546 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:13.546 [2024-06-10 18:54:28.144630] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:13.546 [2024-06-10 18:54:28.144689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590013 ] 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:13.546 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:13.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.546 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:13.547 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:13.547 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.547 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:13.547 [2024-06-10 18:54:28.277802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.804 [2024-06-10 18:54:28.362089] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:13.805 18:54:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- 
# val= 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.179 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:15.180 00:10:15.180 real 0m1.501s 00:10:15.180 user 0m1.309s 00:10:15.180 sys 0m0.196s 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:15.180 18:54:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:15.180 ************************************ 00:10:15.180 END TEST accel_decomp_full_mthread 00:10:15.180 ************************************ 00:10:15.180 18:54:29 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:10:15.180 18:54:29 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:10:15.180 18:54:29 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:10:15.180 18:54:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:15.180 18:54:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1590295 00:10:15.180 18:54:29 accel -- accel/accel.sh@63 -- # waitforlisten 1590295 00:10:15.180 18:54:29 accel -- common/autotest_common.sh@830 -- # '[' -z 1590295 ']' 00:10:15.180 18:54:29 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.180 18:54:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 
00:10:15.180 18:54:29 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:15.180 18:54:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:15.180 18:54:29 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.180 18:54:29 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:15.180 18:54:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:15.180 18:54:29 accel -- common/autotest_common.sh@10 -- # set +x 00:10:15.180 18:54:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:15.180 18:54:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.180 18:54:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.180 18:54:29 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:15.180 18:54:29 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:15.180 18:54:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:15.180 18:54:29 accel -- accel/accel.sh@41 -- # jq -r . 00:10:15.180 [2024-06-10 18:54:29.719072] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:15.180 [2024-06-10 18:54:29.719136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590295 ] 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:15.180 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:15.180 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:15.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.180 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:15.180 [2024-06-10 18:54:29.850797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.180 [2024-06-10 18:54:29.934137] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.113 [2024-06-10 18:54:30.638726] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:16.113 18:54:30 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:16.113 18:54:30 accel -- common/autotest_common.sh@863 -- # return 0 00:10:16.113 18:54:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:16.113 18:54:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:16.113 18:54:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:16.113 18:54:30 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:10:16.113 18:54:30 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:10:16.113 18:54:30 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:10:16.113 18:54:30 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.113 18:54:30 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:10:16.113 18:54:30 accel -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.113 18:54:30 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:10:16.371 18:54:30 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.371 "method": "compressdev_scan_accel_module", 00:10:16.371 18:54:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:16.371 18:54:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:16.371 18:54:30 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:16.371 18:54:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.371 18:54:30 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:16.371 18:54:30 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:16.371 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.371 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.371 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.371 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.371 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.371 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.371 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.371 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 
00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # IFS== 00:10:16.372 18:54:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:16.372 18:54:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:16.372 18:54:31 accel -- accel/accel.sh@75 -- # killprocess 1590295 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@949 -- # '[' -z 1590295 ']' 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@953 -- # kill -0 1590295 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@954 -- # uname 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1590295 00:10:16.372 18:54:31 accel -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1590295' 00:10:16.372 killing process with pid 1590295 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@968 -- # kill 1590295 00:10:16.372 18:54:31 accel -- common/autotest_common.sh@973 -- # wait 1590295 00:10:16.938 18:54:31 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:16.938 18:54:31 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:16.938 18:54:31 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:10:16.938 18:54:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:16.938 18:54:31 accel -- common/autotest_common.sh@10 -- # set +x 00:10:16.938 ************************************ 00:10:16.938 START TEST accel_cdev_comp 00:10:16.938 ************************************ 00:10:16.938 18:54:31 accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
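The `killprocess 1590295` trace above follows a common teardown pattern: guard against an empty pid, probe liveness with `kill -0`, resolve the command name with `ps` (the log resolves it to `reactor_0`, the SPDK reactor thread), then kill and reap. A hedged sketch of that shape — this is a reconstruction from the trace, not the actual `autotest_common.sh` helper, which also handles the sudo case seen at line @959:

```shell
# Sketch of a killprocess-style helper, reconstructed from the trace above.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                         # '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if already gone
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it if it is our own child
}

sleep 60 &
bg_pid=$!
killprocess_sketch "$bg_pid"
```

`kill -0` sends no signal at all; it only checks that the pid exists and is signalable, which is why the trace runs it before attempting the real kill.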
00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:16.938 18:54:31 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:10:16.938 [2024-06-10 18:54:31.509107] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:16.938 [2024-06-10 18:54:31.509161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590588 ] 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:10:16.938 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: 
Requested device 0000:b8:01.3 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:16.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.938 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:16.938 [2024-06-10 18:54:31.629346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.196 [2024-06-10 18:54:31.712899] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.761 [2024-06-10 18:54:32.417802] 
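The `build_accel_config` trace earlier (accel.sh@31–@41) shows how the test feeds `accel_perf` its JSON config: `"method"` entries are accumulated in the `accel_json_cfg` array, joined with commas via a function-local `IFS=,`, piped through `jq -r .`, and handed to `accel_perf` as `-c /dev/fd/62`. A hedged sketch of the assembly step — the wrapper object used here is an assumption for illustration, not copied from accel.sh:

```shell
# Entry taken verbatim from the trace; pmd 0 means "auto-select PMD".
accel_json_cfg=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')

build_accel_config_sketch() {
    # local IFS=, makes ${accel_json_cfg[*]} join the entries with commas,
    # so multiple method objects would form a valid JSON array.
    local IFS=,
    # NOTE: the surrounding "subsystems" wrapper is an assumed shape.
    echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}

build_accel_config_sketch
```

Passing the result over a file descriptor (process substitution in the real run) avoids writing a temp config file per test invocation.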
accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:17.761 [2024-06-10 18:54:32.420207] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xf32510 PMD being used: compress_qat 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 [2024-06-10 18:54:32.424030] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xf372e0 PMD being used: compress_qat 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 
accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:17.761 18:54:32 accel.accel_cdev_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 
00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:17.761 18:54:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- 
# IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:19.149 18:54:33 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:19.149 00:10:19.149 real 0m2.103s 00:10:19.149 user 0m1.579s 00:10:19.149 sys 0m0.528s 00:10:19.149 18:54:33 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:19.149 18:54:33 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:10:19.149 ************************************ 00:10:19.149 END TEST accel_cdev_comp 00:10:19.149 ************************************ 00:10:19.149 18:54:33 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:19.149 18:54:33 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:10:19.149 18:54:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:19.149 18:54:33 accel -- common/autotest_common.sh@10 -- # set +x 00:10:19.149 ************************************ 00:10:19.149 START TEST accel_cdev_decomp 00:10:19.149 ************************************ 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:19.149 18:54:33 
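The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` summary above come from the `run_test` wrapper in `autotest_common.sh`. A hypothetical sketch of its observable behavior — the real helper also manages xtrace state and result bookkeeping, which this omits:

```shell
# Sketch of a run_test-style wrapper, reconstructed from the banners in the log.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # timing summary goes to stderr, like in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch accel_demo true
```

Capturing `$?` immediately after `time` preserves the timed command's exit status, so a failing test still propagates its failure through the wrapper.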
accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:19.149 18:54:33 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:19.149 [2024-06-10 18:54:33.687722] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:19.149 [2024-06-10 18:54:33.687780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591031 ] 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:19.149 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:19.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.149 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:19.150 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:19.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.150 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:19.150 [2024-06-10 18:54:33.824427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.437 [2024-06-10 18:54:33.909372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.006 [2024-06-10 18:54:34.612563] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:20.006 [2024-06-10 18:54:34.614991] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2227510 PMD being used: compress_qat 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.006 [2024-06-10 18:54:34.618891] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x222c2e0 PMD being used: compress_qat 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.006 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:20.007 18:54:34 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:20.007 18:54:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:21.382 00:10:21.382 real 0m2.121s 00:10:21.382 user 0m1.581s 00:10:21.382 sys 0m0.539s 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:21.382 18:54:35 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:21.382 ************************************ 00:10:21.382 END TEST accel_cdev_decomp 00:10:21.382 ************************************ 00:10:21.382 18:54:35 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:21.382 18:54:35 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:10:21.382 18:54:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:21.382 18:54:35 accel -- common/autotest_common.sh@10 -- # set +x 00:10:21.382 ************************************ 00:10:21.382 START TEST accel_cdev_decomp_full 00:10:21.382 ************************************ 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:21.382 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:21.383 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:21.383 18:54:35 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:21.383 [2024-06-10 18:54:35.890898] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:21.383 [2024-06-10 18:54:35.890954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591435 ] 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:21.383 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:21.383 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:21.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.383 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:21.383 [2024-06-10 18:54:36.024103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.383 [2024-06-10 18:54:36.110331] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.317 [2024-06-10 18:54:36.807907] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:22.317 [2024-06-10 18:54:36.810304] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e7a510 PMD being used: compress_qat 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.317 [2024-06-10 18:54:36.813440] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e7a5b0 PMD being used: compress_qat 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 
accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 
accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:22.317 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
IFS=: 00:10:22.318 18:54:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:23.251 00:10:23.251 real 0m2.113s 00:10:23.251 user 0m1.575s 00:10:23.251 sys 0m0.539s 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:23.251 18:54:37 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:23.251 ************************************ 00:10:23.251 END TEST accel_cdev_decomp_full 00:10:23.251 ************************************ 00:10:23.509 18:54:38 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.509 18:54:38 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:10:23.509 18:54:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:23.509 18:54:38 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.509 ************************************ 00:10:23.509 START TEST accel_cdev_decomp_mcore 00:10:23.509 ************************************ 00:10:23.509 18:54:38 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.509 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:23.510 18:54:38 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:23.510 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:23.510 [2024-06-10 18:54:38.081068] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:23.510 [2024-06-10 18:54:38.081124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591735 ] 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:23.510 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:23.510 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:23.510 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.510 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:23.510 [2024-06-10 18:54:38.213514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.767 [2024-06-10 18:54:38.301332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.767 [2024-06-10 18:54:38.301427] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.767 [2024-06-10 18:54:38.301542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.767 [2024-06-10 18:54:38.301543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.331 [2024-06-10 18:54:38.990888] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:24.331 [2024-06-10 18:54:38.993260] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1db8bb0 PMD being used: compress_qat 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 
18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 [2024-06-10 18:54:38.998475] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa35819b8b0 PMD being used: compress_qat 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:24.331 [2024-06-10 18:54:38.999321] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa35019b8b0 PMD being used: compress_qat 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 [2024-06-10 18:54:39.000075] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dbe0f0 PMD being used: compress_qat 00:10:24.331 [2024-06-10 18:54:39.000216] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa34819b8b0 PMD being used: compress_qat 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:24.331 18:54:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 
18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.701 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:25.702 00:10:25.702 real 0m2.150s 00:10:25.702 user 0m6.960s 00:10:25.702 sys 0m0.548s 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:25.702 18:54:40 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:25.702 ************************************ 00:10:25.702 END TEST accel_cdev_decomp_mcore 00:10:25.702 ************************************ 00:10:25.702 18:54:40 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:25.702 18:54:40 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:25.702 18:54:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:25.702 18:54:40 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.702 ************************************ 00:10:25.702 START TEST accel_cdev_decomp_full_mcore 00:10:25.702 ************************************ 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:25.702 18:54:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:25.702 [2024-06-10 18:54:40.313049] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:25.702 [2024-06-10 18:54:40.313109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592267 ] 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:25.702 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.702 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:25.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:25.703 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:25.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:25.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:25.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:25.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:25.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.703 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:25.703 [2024-06-10 18:54:40.448352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.960 [2024-06-10 18:54:40.535412] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.960 [2024-06-10 18:54:40.535524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.960 [2024-06-10 18:54:40.535636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.960 [2024-06-10 18:54:40.535638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.523 [2024-06-10 18:54:41.223951] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:26.524 [2024-06-10 18:54:41.226346] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x206cbb0 PMD being used: compress_qat 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 [2024-06-10 18:54:41.230735] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa26c19b8b0 PMD being used: compress_qat 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:26.524 [2024-06-10 18:54:41.231472] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa26419b8b0 PMD being used: compress_qat 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 [2024-06-10 18:54:41.232380] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x206cc50 PMD being used: compress_qat 00:10:26.524 [2024-06-10 18:54:41.232545] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa25c19b8b0 PMD being used: compress_qat 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 
18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # 
accel_module=dpdk_compressdev 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:26.524 18:54:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.894 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.894 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.894 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 
00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:27.895 00:10:27.895 real 0m2.124s 00:10:27.895 user 0m6.898s 00:10:27.895 sys 0m0.553s 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:27.895 18:54:42 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:27.895 ************************************ 00:10:27.895 END TEST accel_cdev_decomp_full_mcore 00:10:27.895 ************************************ 00:10:27.895 18:54:42 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:27.895 18:54:42 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:10:27.895 18:54:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:27.895 18:54:42 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.895 ************************************ 00:10:27.895 START TEST 
accel_cdev_decomp_mthread 00:10:27.895 ************************************ 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:27.895 18:54:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- 
# jq -r . 00:10:27.895 [2024-06-10 18:54:42.520132] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:27.895 [2024-06-10 18:54:42.520188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592569 ] 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:27.895 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:27.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.895 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:27.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.896 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:28.154 [2024-06-10 18:54:42.652679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.154 [2024-06-10 18:54:42.735939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.719 [2024-06-10 18:54:43.423002] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:28.719 [2024-06-10 18:54:43.425361] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26c9510 PMD being used: compress_qat 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.719 [2024-06-10 18:54:43.429945] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26ce690 PMD being used: compress_qat 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.719 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.719 [2024-06-10 18:54:43.432326] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x27f14f0 PMD being used: compress_qat 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:28.720 18:54:43 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var 
val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:28.720 18:54:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.092 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:30.093 00:10:30.093 real 0m2.104s 00:10:30.093 user 0m1.596s 00:10:30.093 sys 0m0.513s 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:30.093 18:54:44 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:30.093 ************************************ 00:10:30.093 END TEST accel_cdev_decomp_mthread 00:10:30.093 ************************************ 00:10:30.093 18:54:44 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:30.093 18:54:44 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:10:30.093 18:54:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:30.093 18:54:44 accel -- common/autotest_common.sh@10 -- # set +x 00:10:30.093 ************************************ 00:10:30.093 START TEST accel_cdev_decomp_full_mthread 00:10:30.093 ************************************ 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.093 18:54:44 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:30.093 18:54:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:30.093 [2024-06-10 18:54:44.706190] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:30.093 [2024-06-10 18:54:44.706249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593023 ] 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:10:30.093 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: 
Requested device 0000:b8:01.5 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:30.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.093 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:30.093 [2024-06-10 18:54:44.841023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.351 [2024-06-10 18:54:44.925422] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.916 [2024-06-10 18:54:45.623980] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:30.916 [2024-06-10 18:54:45.626385] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x141b510 PMD being used: compress_qat 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 [2024-06-10 18:54:45.630203] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x141b5b0 PMD being used: compress_qat 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.916 [2024-06-10 18:54:45.632802] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1543100 PMD being used: compress_qat 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.916 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 
seconds' 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:30.917 18:54:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:32.290 18:54:46 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:32.290 00:10:32.290 real 0m2.121s 00:10:32.290 user 0m1.581s 00:10:32.290 sys 0m0.539s 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:32.290 18:54:46 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:32.290 ************************************ 00:10:32.291 END TEST accel_cdev_decomp_full_mthread 00:10:32.291 ************************************ 00:10:32.291 18:54:46 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:10:32.291 18:54:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:32.291 18:54:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:32.291 18:54:46 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:32.291 18:54:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:32.291 18:54:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:32.291 18:54:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:32.291 18:54:46 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.291 18:54:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.291 18:54:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.291 18:54:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:32.291 18:54:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:32.291 18:54:46 accel -- 
accel/accel.sh@41 -- # jq -r . 00:10:32.291 ************************************ 00:10:32.291 START TEST accel_dif_functional_tests 00:10:32.291 ************************************ 00:10:32.291 18:54:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:32.291 [2024-06-10 18:54:46.932127] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:32.291 [2024-06-10 18:54:46.932184] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593402 ]
00:10:32.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.291 EAL: Requested device 0000:b6:01.0 cannot be used (same message pair repeated for each of the 32 QAT virtual functions 0000:b6:01.0-0000:b6:02.7 and 0000:b8:01.0-0000:b8:02.7)
00:10:32.549 [2024-06-10 18:54:47.066948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.549 [2024-06-10 18:54:47.153019] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.549 [2024-06-10 18:54:47.153111] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.549 [2024-06-10 18:54:47.153116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.549 00:10:32.549 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.549 http://cunit.sourceforge.net/ 00:10:32.549 00:10:32.549 Suite: accel_dif 00:10:32.549 Test: verify: DIF generated, GUARD check ...passed 00:10:32.549 Test: verify: DIF generated, APPTAG
check ...passed 00:10:32.549 Test: verify: DIF generated, REFTAG check ...passed 00:10:32.549 Test: verify: DIF not generated, GUARD check ...[2024-06-10 18:54:47.239421] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:32.549 passed 00:10:32.549 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 18:54:47.239487] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:32.549 passed 00:10:32.549 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 18:54:47.239521] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:32.549 passed 00:10:32.549 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:32.549 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 18:54:47.239590] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:32.549 passed 00:10:32.549 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:32.549 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:32.549 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:32.549 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 18:54:47.239732] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:32.549 passed 00:10:32.549 Test: verify copy: DIF generated, GUARD check ...passed 00:10:32.549 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:32.549 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:32.549 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 18:54:47.239879] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:32.549 passed 00:10:32.549 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 18:54:47.239910] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 
00:10:32.549 passed 00:10:32.549 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 18:54:47.239941] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:32.549 passed 00:10:32.549 Test: generate copy: DIF generated, GUARD check ...passed 00:10:32.549 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:32.549 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:32.549 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:32.549 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:32.549 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:32.549 Test: generate copy: iovecs-len validate ...[2024-06-10 18:54:47.240163] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:10:32.549 passed 00:10:32.549 Test: generate copy: buffer alignment validate ...passed 00:10:32.549 00:10:32.549 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.549 suites 1 1 n/a 0 0 00:10:32.549 tests 26 26 26 0 0 00:10:32.550 asserts 115 115 115 0 n/a 00:10:32.550 00:10:32.550 Elapsed time = 0.002 seconds 00:10:32.808 00:10:32.808 real 0m0.551s 00:10:32.808 user 0m0.720s 00:10:32.808 sys 0m0.220s 00:10:32.808 18:54:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:32.808 18:54:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:32.808 ************************************ 00:10:32.808 END TEST accel_dif_functional_tests 00:10:32.808 ************************************ 00:10:32.808 00:10:32.808 real 0m51.405s 00:10:32.808 user 0m59.312s 00:10:32.808 sys 0m11.318s 00:10:32.808 18:54:47 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:32.808 18:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.808 ************************************ 00:10:32.808 END 
TEST accel 00:10:32.808 ************************************ 00:10:32.808 18:54:47 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:32.808 18:54:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:32.808 18:54:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:32.808 18:54:47 -- common/autotest_common.sh@10 -- # set +x 00:10:32.808 ************************************ 00:10:32.808 START TEST accel_rpc 00:10:32.808 ************************************ 00:10:32.808 18:54:47 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:33.067 * Looking for test storage... 00:10:33.067 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:10:33.067 18:54:47 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:33.067 18:54:47 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1593579 00:10:33.067 18:54:47 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1593579 00:10:33.067 18:54:47 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:33.067 18:54:47 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1593579 ']' 00:10:33.067 18:54:47 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.067 18:54:47 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:33.067 18:54:47 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
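The accel_dif suite logged above checks the three fields of an 8-byte T10 DIF tuple — Guard (a CRC over the block), App Tag, and Ref Tag — and the failure messages ("Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867") come from those comparisons. A rough Python sketch of such a check, assuming the standard CRC-16/T10-DIF polynomial 0x8BB7 and a Ref Tag equal to the low 32 bits of the LBA; SPDK's actual implementation lives in lib/util/dif.c and differs in detail:

```python
# Illustrative T10 DIF tuple check; not SPDK's implementation (see lib/util/dif.c).

def crc16_t10dif(data, crc=0):
    """CRC-16 with the T10-DIF polynomial 0x8BB7 (no reflection, no xor-out)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def verify_dif(block, stored_guard, stored_app, stored_ref, expected_app, lba):
    """Return mismatch messages in the log's format; an empty list means the tuple verifies."""
    errors = []
    guard = crc16_t10dif(block)
    if guard != stored_guard:
        errors.append(f"Failed to compare Guard: LBA={lba}, "
                      f"Expected={stored_guard:x}, Actual={guard:x}")
    if stored_app != expected_app:
        errors.append(f"Failed to compare App Tag: LBA={lba}, "
                      f"Expected={expected_app:x}, Actual={stored_app:x}")
    expected_ref = lba & 0xFFFFFFFF  # assumption: Ref Tag carries the low 32 bits of the LBA
    if stored_ref != expected_ref:
        errors.append(f"Failed to compare Ref Tag: LBA={lba}, "
                      f"Expected={expected_ref:x}, Actual={stored_ref:x}")
    return errors
```

The "not generated" test cases in the log deliberately plant mismatching tags so each branch above fires once.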
00:10:33.067 18:54:47 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:33.067 18:54:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.067 [2024-06-10 18:54:47.717764] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:33.067 [2024-06-10 18:54:47.717823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593579 ]
00:10:33.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:33.067 EAL: Requested device 0000:b6:01.0 cannot be used (same message pair repeated for each of the 32 QAT virtual functions 0000:b6:01.0-0000:b6:02.7 and 0000:b8:01.0-0000:b8:02.7)
00:10:33.326 [2024-06-10 18:54:47.837353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.326 [2024-06-10 18:54:47.925246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.890 18:54:48 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:33.890 18:54:48 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:10:33.890 18:54:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:33.890 18:54:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:33.890 18:54:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:33.890 18:54:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:33.890 18:54:48 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:33.890 18:54:48 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:10:33.890 18:54:48 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:33.891 18:54:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.148 ************************************ 00:10:34.148 START TEST accel_assign_opcode 00:10:34.148 ************************************ 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.148 [2024-06-10 18:54:48.659668] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.148 [2024-06-10 18:54:48.671681] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.148 18:54:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:34.405 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:34.405 software 00:10:34.405 00:10:34.405 real 0m0.282s 00:10:34.405 user 0m0.045s 00:10:34.405 sys 0m0.016s 00:10:34.405 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:34.405 18:54:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.406 ************************************ 00:10:34.406 END TEST accel_assign_opcode 00:10:34.406 ************************************ 00:10:34.406 18:54:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1593579 00:10:34.406 18:54:48 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1593579 ']' 00:10:34.406 18:54:48 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1593579 00:10:34.406 18:54:48 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:10:34.406 18:54:48 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:34.406 18:54:48 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1593579 00:10:34.406 18:54:49 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:34.406 18:54:49 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:34.406 18:54:49 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1593579' 00:10:34.406 killing process with pid 1593579 00:10:34.406 18:54:49 accel_rpc -- 
common/autotest_common.sh@968 -- # kill 1593579 00:10:34.406 18:54:49 accel_rpc -- common/autotest_common.sh@973 -- # wait 1593579 00:10:34.664 00:10:34.664 real 0m1.827s 00:10:34.664 user 0m1.856s 00:10:34.664 sys 0m0.602s 00:10:34.664 18:54:49 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:34.664 18:54:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.664 ************************************ 00:10:34.664 END TEST accel_rpc 00:10:34.664 ************************************ 00:10:34.664 18:54:49 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:34.664 18:54:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:34.664 18:54:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:34.664 18:54:49 -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 ************************************ 00:10:34.922 START TEST app_cmdline 00:10:34.922 ************************************ 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:34.922 * Looking for test storage... 
00:10:34.922 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:34.922 18:54:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:34.922 18:54:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1594062 00:10:34.922 18:54:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1594062 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1594062 ']' 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.922 18:54:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:34.922 18:54:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 [2024-06-10 18:54:49.632174] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:34.922 [2024-06-10 18:54:49.632238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594062 ]
00:10:35.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.181 EAL: Requested device 0000:b6:01.0 cannot be used (same message pair repeated for each of the 32 QAT virtual functions 0000:b6:01.0-0000:b6:02.7 and 0000:b8:01.0-0000:b8:02.7)
00:10:35.181 [2024-06-10 18:54:49.766398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.181 [2024-06-10 18:54:49.853055] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:36.114 { 00:10:36.114 "version": "SPDK v24.09-pre git sha1 5456a66b7", 00:10:36.114 "fields": { 00:10:36.114 "major": 24, 00:10:36.114 "minor": 9, 00:10:36.114 "patch": 0, 00:10:36.114 "suffix": "-pre", 00:10:36.114 "commit": "5456a66b7" 00:10:36.114 } 00:10:36.114 } 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq
-r ".[]" | sort)) 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:36.114 18:54:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@643 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:36.114 18:54:50 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:36.371 request: 00:10:36.371 { 00:10:36.371 "method": "env_dpdk_get_mem_stats", 00:10:36.371 "req_id": 1 00:10:36.371 } 00:10:36.371 Got JSON-RPC error response 00:10:36.371 response: 00:10:36.371 { 00:10:36.371 "code": -32601, 00:10:36.371 "message": "Method not found" 00:10:36.371 } 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:36.372 18:54:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1594062 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1594062 ']' 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1594062 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1594062 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1594062' 00:10:36.372 killing process with pid 1594062 00:10:36.372 18:54:51 app_cmdline -- common/autotest_common.sh@968 -- # kill 1594062 00:10:36.372 18:54:51 app_cmdline 
-- common/autotest_common.sh@973 -- # wait 1594062 00:10:36.939 00:10:36.939 real 0m1.955s 00:10:36.939 user 0m2.333s 00:10:36.939 sys 0m0.599s 00:10:36.939 18:54:51 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:36.939 18:54:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:36.939 ************************************ 00:10:36.939 END TEST app_cmdline 00:10:36.939 ************************************ 00:10:36.939 18:54:51 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:10:36.939 18:54:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:36.939 18:54:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:36.939 18:54:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.939 ************************************ 00:10:36.939 START TEST version 00:10:36.939 ************************************ 00:10:36.939 18:54:51 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:10:36.939 * Looking for test storage... 
00:10:36.939 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:36.939 18:54:51 version -- app/version.sh@17 -- # get_header_version major 00:10:36.939 18:54:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # cut -f2 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:36.939 18:54:51 version -- app/version.sh@17 -- # major=24 00:10:36.939 18:54:51 version -- app/version.sh@18 -- # get_header_version minor 00:10:36.939 18:54:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # cut -f2 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:36.939 18:54:51 version -- app/version.sh@18 -- # minor=9 00:10:36.939 18:54:51 version -- app/version.sh@19 -- # get_header_version patch 00:10:36.939 18:54:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # cut -f2 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:36.939 18:54:51 version -- app/version.sh@19 -- # patch=0 00:10:36.939 18:54:51 version -- app/version.sh@20 -- # get_header_version suffix 00:10:36.939 18:54:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # cut -f2 00:10:36.939 18:54:51 version -- app/version.sh@14 -- # tr -d '"' 00:10:36.939 18:54:51 version -- app/version.sh@20 -- # suffix=-pre 00:10:36.939 18:54:51 version -- app/version.sh@22 -- # version=24.9 00:10:36.939 18:54:51 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:36.939 18:54:51 version -- app/version.sh@28 -- # version=24.9rc0 00:10:36.939 18:54:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:10:36.939 18:54:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:36.939 18:54:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:36.939 18:54:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:36.939 00:10:36.939 real 0m0.186s 00:10:36.939 user 0m0.097s 00:10:36.939 sys 0m0.136s 00:10:36.939 18:54:51 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:36.939 18:54:51 version -- common/autotest_common.sh@10 -- # set +x 00:10:36.939 ************************************ 00:10:36.939 END TEST version 00:10:36.939 ************************************ 00:10:37.198 18:54:51 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:10:37.198 18:54:51 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:10:37.198 18:54:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:10:37.198 18:54:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:37.198 18:54:51 -- common/autotest_common.sh@10 -- # set +x 00:10:37.198 ************************************ 00:10:37.198 START TEST blockdev_general 00:10:37.198 ************************************ 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:10:37.198 * Looking for test storage... 
00:10:37.198 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:37.198 18:54:51 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:10:37.198 18:54:51 blockdev_general -- 
bdev/blockdev.sh@686 -- # wait_for_rpc= 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1594463 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:10:37.198 18:54:51 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 1594463 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@830 -- # '[' -z 1594463 ']' 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:37.198 18:54:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:37.198 [2024-06-10 18:54:51.946726] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:10:37.198 [2024-06-10 18:54:51.946789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594463 ] 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:37.456 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:37.456 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:37.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.456 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:37.456 [2024-06-10 18:54:52.072241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.456 [2024-06-10 18:54:52.161016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.388 18:54:52 blockdev_general -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:38.388 18:54:52 blockdev_general -- common/autotest_common.sh@863 -- # return 0 00:10:38.388 18:54:52 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:10:38.388 18:54:52 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:10:38.388 18:54:52 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:10:38.388 18:54:52 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.388 18:54:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.388 [2024-06-10 18:54:53.064818] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:38.388 [2024-06-10 18:54:53.064872] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:38.388 00:10:38.388 [2024-06-10 18:54:53.072799] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:10:38.388 [2024-06-10 18:54:53.072822] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:38.388 00:10:38.388 Malloc0 00:10:38.388 Malloc1 00:10:38.388 Malloc2 00:10:38.388 Malloc3 00:10:38.388 Malloc4 00:10:38.646 Malloc5 00:10:38.646 Malloc6 00:10:38.646 Malloc7 00:10:38.646 Malloc8 00:10:38.646 Malloc9 00:10:38.646 [2024-06-10 18:54:53.207050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:38.646 [2024-06-10 18:54:53.207096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.646 [2024-06-10 18:54:53.207114] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18569f0 00:10:38.646 [2024-06-10 18:54:53.207125] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.646 [2024-06-10 18:54:53.208377] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.646 [2024-06-10 18:54:53.208405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:38.646 TestPT 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.646 18:54:53 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:10:38.646 5000+0 records in 00:10:38.646 5000+0 records out 00:10:38.646 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0278571 s, 368 MB/s 00:10:38.646 18:54:53 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 AIO0 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.646 
18:54:53 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.646 18:54:53 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:10:38.646 18:54:53 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.646 18:54:53 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.646 18:54:53 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.646 18:54:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.905 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.905 18:54:53 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:10:38.905 18:54:53 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:10:38.905 18:54:53 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:10:38.905 18:54:53 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:38.905 18:54:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
00:10:38.905 18:54:53 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:38.905 18:54:53 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:10:38.905 18:54:53 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:10:38.906 18:54:53 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c81d8ef-daed-49ea-8f3c-2e6cf243c46d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c81d8ef-daed-49ea-8f3c-2e6cf243c46d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "28ec116a-448c-5da4-9314-7e441760cf4b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "28ec116a-448c-5da4-9314-7e441760cf4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' 
"aliases": [' ' "231d4d40-5d15-5a76-976c-464cf999810b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "231d4d40-5d15-5a76-976c-464cf999810b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "7dbb2905-1cb4-5248-93a3-4b73b1cb0e32"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dbb2905-1cb4-5248-93a3-4b73b1cb0e32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "98cf50d9-260e-5ee0-8afa-de99558233b1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "98cf50d9-260e-5ee0-8afa-de99558233b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "04e177e8-52c7-5193-aaf1-4a70cfcd8e63"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "04e177e8-52c7-5193-aaf1-4a70cfcd8e63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "18442cd2-c439-5e8e-adfa-5ae74687fb84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "18442cd2-c439-5e8e-adfa-5ae74687fb84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "21890405-61e5-5d18-b07b-878124ab58ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"21890405-61e5-5d18-b07b-878124ab58ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dce0d170-794c-5f55-b279-12b711a32485"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dce0d170-794c-5f55-b279-12b711a32485",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9af559cb-0c9e-5cb3-b3b0-3c94d0cb4577"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9af559cb-0c9e-5cb3-b3b0-3c94d0cb4577",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "be5ce2d8-4443-5154-b4c3-15393943c642"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "be5ce2d8-4443-5154-b4c3-15393943c642",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "14b983dc-5823-56a6-acee-fcae92a3e529"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "14b983dc-5823-56a6-acee-fcae92a3e529",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "28f2f200-7ca8-4385-973a-d3de6428f680"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"28f2f200-7ca8-4385-973a-d3de6428f680",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "28f2f200-7ca8-4385-973a-d3de6428f680",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0a1d8d02-78bf-4af5-bf9c-0a2fa1ee4c68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f285b4a8-e823-47cd-b938-d7356ac166bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "07f5941b-3e7e-465e-afdf-ba3102fcdcf5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1acf5194-94e7-4e77-b724-d7ea9c16daa1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3a1e907b-6c57-4f17-932b-f68d198bc69d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c57c9bb7-d4cc-4268-b783-877505bca13d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "7bcfc7bf-c8fa-4550-9e3e-1809694b1637"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "7bcfc7bf-c8fa-4550-9e3e-1809694b1637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:38.906 18:54:53 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:10:38.906 18:54:53 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:10:38.906 18:54:53 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:10:38.906 18:54:53 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 1594463 00:10:38.906 18:54:53 blockdev_general -- common/autotest_common.sh@949 -- # '[' -z 1594463 ']' 
00:10:38.906 18:54:53 blockdev_general -- common/autotest_common.sh@953 -- # kill -0 1594463 00:10:38.906 18:54:53 blockdev_general -- common/autotest_common.sh@954 -- # uname 00:10:38.906 18:54:53 blockdev_general -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:38.906 18:54:53 blockdev_general -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1594463 00:10:39.165 18:54:53 blockdev_general -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:39.165 18:54:53 blockdev_general -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:39.165 18:54:53 blockdev_general -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1594463' 00:10:39.165 killing process with pid 1594463 00:10:39.165 18:54:53 blockdev_general -- common/autotest_common.sh@968 -- # kill 1594463 00:10:39.165 18:54:53 blockdev_general -- common/autotest_common.sh@973 -- # wait 1594463 00:10:39.423 18:54:54 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:39.423 18:54:54 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:39.423 18:54:54 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:10:39.423 18:54:54 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:39.423 18:54:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:39.423 ************************************ 00:10:39.423 START TEST bdev_hello_world 00:10:39.423 ************************************ 00:10:39.423 18:54:54 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:39.681 [2024-06-10 18:54:54.201382] 
Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:39.681 [2024-06-10 18:54:54.201436] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595007 ] 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 
EAL: Requested device 0000:b6:02.3 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.681 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:39.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 
0000:b8:02.1 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:39.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.682 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:39.682 [2024-06-10 18:54:54.333323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.682 [2024-06-10 18:54:54.416773] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.939 [2024-06-10 18:54:54.563433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:39.939 [2024-06-10 18:54:54.563490] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:39.939 [2024-06-10 18:54:54.563503] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:39.939 [2024-06-10 18:54:54.571440] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:39.939 [2024-06-10 18:54:54.571465] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:39.939 [2024-06-10 18:54:54.579453] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:39.939 [2024-06-10 18:54:54.579475] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 
00:10:39.939 [2024-06-10 18:54:54.650742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:39.940 [2024-06-10 18:54:54.650791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.940 [2024-06-10 18:54:54.650806] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x202fff0 00:10:39.940 [2024-06-10 18:54:54.650818] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.940 [2024-06-10 18:54:54.652078] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.940 [2024-06-10 18:54:54.652120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:40.197 [2024-06-10 18:54:54.792032] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:40.197 [2024-06-10 18:54:54.792102] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:10:40.197 [2024-06-10 18:54:54.792166] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:40.197 [2024-06-10 18:54:54.792243] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:40.197 [2024-06-10 18:54:54.792317] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:40.197 [2024-06-10 18:54:54.792347] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:40.197 [2024-06-10 18:54:54.792410] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:10:40.197 00:10:40.197 [2024-06-10 18:54:54.792451] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:40.455 00:10:40.455 real 0m0.919s 00:10:40.455 user 0m0.596s 00:10:40.455 sys 0m0.287s 00:10:40.455 18:54:55 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:40.455 18:54:55 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:40.455 ************************************ 00:10:40.455 END TEST bdev_hello_world 00:10:40.455 ************************************ 00:10:40.455 18:54:55 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:10:40.455 18:54:55 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:40.455 18:54:55 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:40.455 18:54:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:40.455 ************************************ 00:10:40.455 START TEST bdev_bounds 00:10:40.455 ************************************ 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1595050 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1595050' 00:10:40.455 Process bdevio pid: 1595050 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1595050 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@830 -- # 
'[' -z 1595050 ']' 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:40.455 18:54:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:40.455 [2024-06-10 18:54:55.196071] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:40.455 [2024-06-10 18:54:55.196125] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595050 ] 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:40.713 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:40.713 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:40.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:40.713 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:40.713 [2024-06-10 18:54:55.329726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.713 [2024-06-10 18:54:55.418585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.713 [2024-06-10 18:54:55.418671] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 
00:10:40.713 [2024-06-10 18:54:55.418675] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.972 [2024-06-10 18:54:55.570833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:40.972 [2024-06-10 18:54:55.570889] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:40.972 [2024-06-10 18:54:55.570902] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:40.972 [2024-06-10 18:54:55.578845] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:40.972 [2024-06-10 18:54:55.578870] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:40.972 [2024-06-10 18:54:55.586859] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:40.972 [2024-06-10 18:54:55.586881] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:40.972 [2024-06-10 18:54:55.658373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:40.972 [2024-06-10 18:54:55.658423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.972 [2024-06-10 18:54:55.658444] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf72260 00:10:40.972 [2024-06-10 18:54:55.658456] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.972 [2024-06-10 18:54:55.659760] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.972 [2024-06-10 18:54:55.659787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:41.555 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:41.555 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:10:41.555 18:54:56 blockdev_general.bdev_bounds -- 
bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:41.555 I/O targets: 00:10:41.555 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:10:41.555 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:10:41.555 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:10:41.555 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:10:41.555 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:10:41.555 raid0: 131072 blocks of 512 bytes (64 MiB) 00:10:41.555 concat0: 131072 blocks of 512 bytes (64 MiB) 00:10:41.555 raid1: 65536 blocks of 512 bytes (32 MiB) 00:10:41.555 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:10:41.555 00:10:41.555 00:10:41.555 CUnit - A unit testing framework for C - Version 2.1-3 00:10:41.555 http://cunit.sourceforge.net/ 00:10:41.555 00:10:41.555 00:10:41.555 Suite: bdevio tests on: AIO0 00:10:41.555 Test: blockdev write read block ...passed 00:10:41.555 Test: blockdev write zeroes read block ...passed 00:10:41.555 Test: blockdev write zeroes read no split ...passed 00:10:41.555 Test: blockdev write zeroes read split ...passed 00:10:41.555 Test: blockdev write zeroes read split partial ...passed 00:10:41.555 Test: blockdev reset ...passed 00:10:41.555 Test: blockdev write read 8 blocks ...passed 00:10:41.555 Test: blockdev write read size > 128k ...passed 00:10:41.555 Test: blockdev write read invalid size ...passed 00:10:41.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.555 Test: blockdev write 
read max offset ...passed 00:10:41.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.555 Test: blockdev writev readv 8 blocks ...passed 00:10:41.555 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.555 Test: blockdev writev readv block ...passed 00:10:41.555 Test: blockdev writev readv size > 128k ...passed 00:10:41.555 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.555 Test: blockdev comparev and writev ...passed 00:10:41.555 Test: blockdev nvme passthru rw ...passed 00:10:41.555 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.555 Test: blockdev nvme admin passthru ...passed 00:10:41.555 Test: blockdev copy ...passed 00:10:41.555 Suite: bdevio tests on: raid1 00:10:41.555 Test: blockdev write read block ...passed 00:10:41.555 Test: blockdev write zeroes read block ...passed 00:10:41.555 Test: blockdev write zeroes read no split ...passed 00:10:41.555 Test: blockdev write zeroes read split ...passed 00:10:41.555 Test: blockdev write zeroes read split partial ...passed 00:10:41.555 Test: blockdev reset ...passed 00:10:41.555 Test: blockdev write read 8 blocks ...passed 00:10:41.555 Test: blockdev write read size > 128k ...passed 00:10:41.555 Test: blockdev write read invalid size ...passed 00:10:41.555 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.555 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.555 Test: blockdev write read max offset ...passed 00:10:41.555 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.555 Test: blockdev writev readv 8 blocks ...passed 00:10:41.555 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.556 Test: blockdev writev readv block ...passed 00:10:41.556 Test: blockdev writev readv size > 128k ...passed 00:10:41.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.556 Test: blockdev comparev and writev ...passed 00:10:41.556 
Test: blockdev nvme passthru rw ...passed 00:10:41.556 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.556 Test: blockdev nvme admin passthru ...passed 00:10:41.556 Test: blockdev copy ...passed 00:10:41.556 Suite: bdevio tests on: concat0 00:10:41.556 Test: blockdev write read block ...passed 00:10:41.556 Test: blockdev write zeroes read block ...passed 00:10:41.556 Test: blockdev write zeroes read no split ...passed 00:10:41.556 Test: blockdev write zeroes read split ...passed 00:10:41.556 Test: blockdev write zeroes read split partial ...passed 00:10:41.556 Test: blockdev reset ...passed 00:10:41.556 Test: blockdev write read 8 blocks ...passed 00:10:41.556 Test: blockdev write read size > 128k ...passed 00:10:41.556 Test: blockdev write read invalid size ...passed 00:10:41.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.556 Test: blockdev write read max offset ...passed 00:10:41.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.556 Test: blockdev writev readv 8 blocks ...passed 00:10:41.556 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.556 Test: blockdev writev readv block ...passed 00:10:41.556 Test: blockdev writev readv size > 128k ...passed 00:10:41.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.556 Test: blockdev comparev and writev ...passed 00:10:41.556 Test: blockdev nvme passthru rw ...passed 00:10:41.556 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.556 Test: blockdev nvme admin passthru ...passed 00:10:41.556 Test: blockdev copy ...passed 00:10:41.556 Suite: bdevio tests on: raid0 00:10:41.556 Test: blockdev write read block ...passed 00:10:41.556 Test: blockdev write zeroes read block ...passed 00:10:41.556 Test: blockdev write zeroes read no split ...passed 00:10:41.556 Test: blockdev write zeroes read split ...passed 
00:10:41.556 Test: blockdev write zeroes read split partial ...passed 00:10:41.556 Test: blockdev reset ...passed 00:10:41.556 Test: blockdev write read 8 blocks ...passed 00:10:41.556 Test: blockdev write read size > 128k ...passed 00:10:41.556 Test: blockdev write read invalid size ...passed 00:10:41.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.556 Test: blockdev write read max offset ...passed 00:10:41.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.556 Test: blockdev writev readv 8 blocks ...passed 00:10:41.556 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.556 Test: blockdev writev readv block ...passed 00:10:41.556 Test: blockdev writev readv size > 128k ...passed 00:10:41.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.556 Test: blockdev comparev and writev ...passed 00:10:41.556 Test: blockdev nvme passthru rw ...passed 00:10:41.556 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.556 Test: blockdev nvme admin passthru ...passed 00:10:41.556 Test: blockdev copy ...passed 00:10:41.556 Suite: bdevio tests on: TestPT 00:10:41.556 Test: blockdev write read block ...passed 00:10:41.556 Test: blockdev write zeroes read block ...passed 00:10:41.556 Test: blockdev write zeroes read no split ...passed 00:10:41.556 Test: blockdev write zeroes read split ...passed 00:10:41.556 Test: blockdev write zeroes read split partial ...passed 00:10:41.556 Test: blockdev reset ...passed 00:10:41.556 Test: blockdev write read 8 blocks ...passed 00:10:41.556 Test: blockdev write read size > 128k ...passed 00:10:41.556 Test: blockdev write read invalid size ...passed 00:10:41.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.556 Test: blockdev write 
read max offset ...passed 00:10:41.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.556 Test: blockdev writev readv 8 blocks ...passed 00:10:41.556 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.556 Test: blockdev writev readv block ...passed 00:10:41.556 Test: blockdev writev readv size > 128k ...passed 00:10:41.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.556 Test: blockdev comparev and writev ...passed 00:10:41.556 Test: blockdev nvme passthru rw ...passed 00:10:41.556 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.556 Test: blockdev nvme admin passthru ...passed 00:10:41.556 Test: blockdev copy ...passed 00:10:41.556 Suite: bdevio tests on: Malloc2p7 00:10:41.556 Test: blockdev write read block ...passed 00:10:41.556 Test: blockdev write zeroes read block ...passed 00:10:41.556 Test: blockdev write zeroes read no split ...passed 00:10:41.817 Test: blockdev write zeroes read split ...passed 00:10:41.817 Test: blockdev write zeroes read split partial ...passed 00:10:41.817 Test: blockdev reset ...passed 00:10:41.817 Test: blockdev write read 8 blocks ...passed 00:10:41.817 Test: blockdev write read size > 128k ...passed 00:10:41.817 Test: blockdev write read invalid size ...passed 00:10:41.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.817 Test: blockdev write read max offset ...passed 00:10:41.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.817 Test: blockdev writev readv 8 blocks ...passed 00:10:41.817 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.817 Test: blockdev writev readv block ...passed 00:10:41.817 Test: blockdev writev readv size > 128k ...passed 00:10:41.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.817 Test: blockdev comparev and writev ...passed 
00:10:41.817 Test: blockdev nvme passthru rw ...passed 00:10:41.817 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.817 Test: blockdev nvme admin passthru ...passed 00:10:41.817 Test: blockdev copy ...passed 00:10:41.817 Suite: bdevio tests on: Malloc2p6 00:10:41.817 Test: blockdev write read block ...passed 00:10:41.817 Test: blockdev write zeroes read block ...passed 00:10:41.817 Test: blockdev write zeroes read no split ...passed 00:10:41.817 Test: blockdev write zeroes read split ...passed 00:10:41.817 Test: blockdev write zeroes read split partial ...passed 00:10:41.817 Test: blockdev reset ...passed 00:10:41.817 Test: blockdev write read 8 blocks ...passed 00:10:41.817 Test: blockdev write read size > 128k ...passed 00:10:41.817 Test: blockdev write read invalid size ...passed 00:10:41.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.817 Test: blockdev write read max offset ...passed 00:10:41.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.817 Test: blockdev writev readv 8 blocks ...passed 00:10:41.817 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.817 Test: blockdev writev readv block ...passed 00:10:41.817 Test: blockdev writev readv size > 128k ...passed 00:10:41.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.817 Test: blockdev comparev and writev ...passed 00:10:41.817 Test: blockdev nvme passthru rw ...passed 00:10:41.817 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.817 Test: blockdev nvme admin passthru ...passed 00:10:41.817 Test: blockdev copy ...passed 00:10:41.817 Suite: bdevio tests on: Malloc2p5 00:10:41.817 Test: blockdev write read block ...passed 00:10:41.817 Test: blockdev write zeroes read block ...passed 00:10:41.817 Test: blockdev write zeroes read no split ...passed 00:10:41.817 Test: blockdev write zeroes 
read split ...passed 00:10:41.817 Test: blockdev write zeroes read split partial ...passed 00:10:41.817 Test: blockdev reset ...passed 00:10:41.817 Test: blockdev write read 8 blocks ...passed 00:10:41.817 Test: blockdev write read size > 128k ...passed 00:10:41.817 Test: blockdev write read invalid size ...passed 00:10:41.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.817 Test: blockdev write read max offset ...passed 00:10:41.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.817 Test: blockdev writev readv 8 blocks ...passed 00:10:41.817 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.817 Test: blockdev writev readv block ...passed 00:10:41.817 Test: blockdev writev readv size > 128k ...passed 00:10:41.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.817 Test: blockdev comparev and writev ...passed 00:10:41.817 Test: blockdev nvme passthru rw ...passed 00:10:41.817 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.817 Test: blockdev nvme admin passthru ...passed 00:10:41.817 Test: blockdev copy ...passed 00:10:41.817 Suite: bdevio tests on: Malloc2p4 00:10:41.817 Test: blockdev write read block ...passed 00:10:41.817 Test: blockdev write zeroes read block ...passed 00:10:41.817 Test: blockdev write zeroes read no split ...passed 00:10:41.817 Test: blockdev write zeroes read split ...passed 00:10:41.817 Test: blockdev write zeroes read split partial ...passed 00:10:41.817 Test: blockdev reset ...passed 00:10:41.817 Test: blockdev write read 8 blocks ...passed 00:10:41.817 Test: blockdev write read size > 128k ...passed 00:10:41.817 Test: blockdev write read invalid size ...passed 00:10:41.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.817 
Test: blockdev write read max offset ...passed 00:10:41.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.817 Test: blockdev writev readv 8 blocks ...passed 00:10:41.817 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.817 Test: blockdev writev readv block ...passed 00:10:41.817 Test: blockdev writev readv size > 128k ...passed 00:10:41.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.817 Test: blockdev comparev and writev ...passed 00:10:41.817 Test: blockdev nvme passthru rw ...passed 00:10:41.817 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.817 Test: blockdev nvme admin passthru ...passed 00:10:41.817 Test: blockdev copy ...passed 00:10:41.817 Suite: bdevio tests on: Malloc2p3 00:10:41.817 Test: blockdev write read block ...passed 00:10:41.817 Test: blockdev write zeroes read block ...passed 00:10:41.817 Test: blockdev write zeroes read no split ...passed 00:10:41.817 Test: blockdev write zeroes read split ...passed 00:10:41.817 Test: blockdev write zeroes read split partial ...passed 00:10:41.817 Test: blockdev reset ...passed 00:10:41.817 Test: blockdev write read 8 blocks ...passed 00:10:41.817 Test: blockdev write read size > 128k ...passed 00:10:41.817 Test: blockdev write read invalid size ...passed 00:10:41.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.817 Test: blockdev write read max offset ...passed 00:10:41.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.817 Test: blockdev writev readv 8 blocks ...passed 00:10:41.817 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.817 Test: blockdev writev readv block ...passed 00:10:41.817 Test: blockdev writev readv size > 128k ...passed 00:10:41.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.817 Test: blockdev comparev and 
writev ...passed 00:10:41.817 Test: blockdev nvme passthru rw ...passed 00:10:41.817 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.817 Test: blockdev nvme admin passthru ...passed 00:10:41.817 Test: blockdev copy ...passed 00:10:41.817 Suite: bdevio tests on: Malloc2p2 00:10:41.817 Test: blockdev write read block ...passed 00:10:41.817 Test: blockdev write zeroes read block ...passed 00:10:41.817 Test: blockdev write zeroes read no split ...passed 00:10:41.817 Test: blockdev write zeroes read split ...passed 00:10:41.817 Test: blockdev write zeroes read split partial ...passed 00:10:41.817 Test: blockdev reset ...passed 00:10:41.817 Test: blockdev write read 8 blocks ...passed 00:10:41.817 Test: blockdev write read size > 128k ...passed 00:10:41.817 Test: blockdev write read invalid size ...passed 00:10:41.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.817 Test: blockdev write read max offset ...passed 00:10:41.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.818 Test: blockdev writev readv 8 blocks ...passed 00:10:41.818 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.818 Test: blockdev writev readv block ...passed 00:10:41.818 Test: blockdev writev readv size > 128k ...passed 00:10:41.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.818 Test: blockdev comparev and writev ...passed 00:10:41.818 Test: blockdev nvme passthru rw ...passed 00:10:41.818 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.818 Test: blockdev nvme admin passthru ...passed 00:10:41.818 Test: blockdev copy ...passed 00:10:41.818 Suite: bdevio tests on: Malloc2p1 00:10:41.818 Test: blockdev write read block ...passed 00:10:41.818 Test: blockdev write zeroes read block ...passed 00:10:41.818 Test: blockdev write zeroes read no split ...passed 00:10:41.818 Test: blockdev 
write zeroes read split ...passed 00:10:41.818 Test: blockdev write zeroes read split partial ...passed 00:10:41.818 Test: blockdev reset ...passed 00:10:41.818 Test: blockdev write read 8 blocks ...passed 00:10:41.818 Test: blockdev write read size > 128k ...passed 00:10:41.818 Test: blockdev write read invalid size ...passed 00:10:41.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.818 Test: blockdev write read max offset ...passed 00:10:41.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.818 Test: blockdev writev readv 8 blocks ...passed 00:10:41.818 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.818 Test: blockdev writev readv block ...passed 00:10:41.818 Test: blockdev writev readv size > 128k ...passed 00:10:41.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.818 Test: blockdev comparev and writev ...passed 00:10:41.818 Test: blockdev nvme passthru rw ...passed 00:10:41.818 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.818 Test: blockdev nvme admin passthru ...passed 00:10:41.818 Test: blockdev copy ...passed 00:10:41.818 Suite: bdevio tests on: Malloc2p0 00:10:41.818 Test: blockdev write read block ...passed 00:10:41.818 Test: blockdev write zeroes read block ...passed 00:10:41.818 Test: blockdev write zeroes read no split ...passed 00:10:41.818 Test: blockdev write zeroes read split ...passed 00:10:41.818 Test: blockdev write zeroes read split partial ...passed 00:10:41.818 Test: blockdev reset ...passed 00:10:41.818 Test: blockdev write read 8 blocks ...passed 00:10:41.818 Test: blockdev write read size > 128k ...passed 00:10:41.818 Test: blockdev write read invalid size ...passed 00:10:41.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 
00:10:41.818 Test: blockdev write read max offset ...passed 00:10:41.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.818 Test: blockdev writev readv 8 blocks ...passed 00:10:41.818 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.818 Test: blockdev writev readv block ...passed 00:10:41.818 Test: blockdev writev readv size > 128k ...passed 00:10:41.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.818 Test: blockdev comparev and writev ...passed 00:10:41.818 Test: blockdev nvme passthru rw ...passed 00:10:41.818 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.818 Test: blockdev nvme admin passthru ...passed 00:10:41.818 Test: blockdev copy ...passed 00:10:41.818 Suite: bdevio tests on: Malloc1p1 00:10:41.818 Test: blockdev write read block ...passed 00:10:41.818 Test: blockdev write zeroes read block ...passed 00:10:41.818 Test: blockdev write zeroes read no split ...passed 00:10:41.818 Test: blockdev write zeroes read split ...passed 00:10:41.818 Test: blockdev write zeroes read split partial ...passed 00:10:41.818 Test: blockdev reset ...passed 00:10:41.818 Test: blockdev write read 8 blocks ...passed 00:10:41.818 Test: blockdev write read size > 128k ...passed 00:10:41.818 Test: blockdev write read invalid size ...passed 00:10:41.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.818 Test: blockdev write read max offset ...passed 00:10:41.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.818 Test: blockdev writev readv 8 blocks ...passed 00:10:41.818 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.818 Test: blockdev writev readv block ...passed 00:10:41.818 Test: blockdev writev readv size > 128k ...passed 00:10:41.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.818 Test: blockdev 
comparev and writev ...passed 00:10:41.818 Test: blockdev nvme passthru rw ...passed 00:10:41.818 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.818 Test: blockdev nvme admin passthru ...passed 00:10:41.818 Test: blockdev copy ...passed 00:10:41.818 Suite: bdevio tests on: Malloc1p0 00:10:41.818 Test: blockdev write read block ...passed 00:10:41.818 Test: blockdev write zeroes read block ...passed 00:10:41.818 Test: blockdev write zeroes read no split ...passed 00:10:41.818 Test: blockdev write zeroes read split ...passed 00:10:41.818 Test: blockdev write zeroes read split partial ...passed 00:10:41.818 Test: blockdev reset ...passed 00:10:41.818 Test: blockdev write read 8 blocks ...passed 00:10:41.818 Test: blockdev write read size > 128k ...passed 00:10:41.818 Test: blockdev write read invalid size ...passed 00:10:41.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.818 Test: blockdev write read max offset ...passed 00:10:41.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.818 Test: blockdev writev readv 8 blocks ...passed 00:10:41.818 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.818 Test: blockdev writev readv block ...passed 00:10:41.818 Test: blockdev writev readv size > 128k ...passed 00:10:41.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.818 Test: blockdev comparev and writev ...passed 00:10:41.818 Test: blockdev nvme passthru rw ...passed 00:10:41.818 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.818 Test: blockdev nvme admin passthru ...passed 00:10:41.818 Test: blockdev copy ...passed 00:10:41.818 Suite: bdevio tests on: Malloc0 00:10:41.818 Test: blockdev write read block ...passed 00:10:41.818 Test: blockdev write zeroes read block ...passed 00:10:41.818 Test: blockdev write zeroes read no split ...passed 00:10:41.818 
Test: blockdev write zeroes read split ...passed 00:10:41.818 Test: blockdev write zeroes read split partial ...passed 00:10:41.818 Test: blockdev reset ...passed 00:10:41.818 Test: blockdev write read 8 blocks ...passed 00:10:41.818 Test: blockdev write read size > 128k ...passed 00:10:41.818 Test: blockdev write read invalid size ...passed 00:10:41.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.818 Test: blockdev write read max offset ...passed 00:10:41.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.818 Test: blockdev writev readv 8 blocks ...passed 00:10:41.818 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.818 Test: blockdev writev readv block ...passed 00:10:41.818 Test: blockdev writev readv size > 128k ...passed 00:10:41.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.818 Test: blockdev comparev and writev ...passed 00:10:41.818 Test: blockdev nvme passthru rw ...passed 00:10:41.818 Test: blockdev nvme passthru vendor specific ...passed 00:10:41.818 Test: blockdev nvme admin passthru ...passed 00:10:41.818 Test: blockdev copy ...passed 00:10:41.818 00:10:41.818 Run Summary: Type Total Ran Passed Failed Inactive 00:10:41.818 suites 16 16 n/a 0 0 00:10:41.818 tests 368 368 368 0 0 00:10:41.818 asserts 2224 2224 2224 0 n/a 00:10:41.818 00:10:41.818 Elapsed time = 0.485 seconds 00:10:41.818 0 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1595050 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 1595050 ']' 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 1595050 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1595050 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1595050' 00:10:41.818 killing process with pid 1595050 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # kill 1595050 00:10:41.818 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # wait 1595050 00:10:42.076 18:54:56 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:10:42.076 00:10:42.076 real 0m1.606s 00:10:42.076 user 0m4.001s 00:10:42.076 sys 0m0.452s 00:10:42.076 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:42.076 18:54:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:42.076 ************************************ 00:10:42.076 END TEST bdev_bounds 00:10:42.076 ************************************ 00:10:42.076 18:54:56 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:42.076 18:54:56 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:10:42.076 18:54:56 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:42.076 18:54:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:42.076 ************************************ 00:10:42.076 START TEST bdev_nbd 
00:10:42.076 ************************************ 00:10:42.076 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:42.076 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' 
'/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1595405 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1595405 /var/tmp/spdk-nbd.sock 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 1595405 ']' 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:42.334 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:42.335 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:42.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:42.335 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:42.335 18:54:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:42.335 [2024-06-10 18:54:56.897801] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:10:42.335 [2024-06-10 18:54:56.897857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.0 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.1 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.2 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.3 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.4 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.5 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.6 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:01.7 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.0 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.1 cannot be used 00:10:42.335 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.2 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.3 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.4 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.5 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.6 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b6:02.7 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.0 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.1 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.2 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.3 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.4 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.5 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.6 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:01.7 cannot be used 00:10:42.335 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.0 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.1 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.2 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.3 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.4 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.5 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.6 cannot be used 00:10:42.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.335 EAL: Requested device 0000:b8:02.7 cannot be used 00:10:42.335 [2024-06-10 18:54:57.032992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.593 [2024-06-10 18:54:57.121315] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.593 [2024-06-10 18:54:57.269233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:42.593 [2024-06-10 18:54:57.269290] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:42.593 [2024-06-10 18:54:57.269303] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:42.593 [2024-06-10 18:54:57.277244] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:42.593 [2024-06-10 18:54:57.277268] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:42.593 
[2024-06-10 18:54:57.285257] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:42.593 [2024-06-10 18:54:57.285279] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:42.850 [2024-06-10 18:54:57.356493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:42.850 [2024-06-10 18:54:57.356538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.850 [2024-06-10 18:54:57.356553] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x279caa0 00:10:42.850 [2024-06-10 18:54:57.356569] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.850 [2024-06-10 18:54:57.357846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.850 [2024-06-10 18:54:57.357873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:43.108 18:54:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # 
grep -q -w nbd0 /proc/partitions 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.366 1+0 records in 00:10:43.366 1+0 records out 00:10:43.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026209 s, 15.6 MB/s 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:43.366 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:10:43.624 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:43.624 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.625 1+0 records in 00:10:43.625 1+0 records out 00:10:43.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290062 s, 14.1 MB/s 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:43.625 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.883 1+0 records in 00:10:43.883 1+0 records out 00:10:43.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285875 s, 14.3 MB/s 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:43.883 18:54:58 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:43.883 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.141 1+0 records in 00:10:44.141 1+0 records out 00:10:44.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305719 s, 13.4 MB/s 00:10:44.141 
18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:44.141 18:54:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:44.399 18:54:59 
blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.399 1+0 records in 00:10:44.399 1+0 records out 00:10:44.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292177 s, 14.0 MB/s 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:44.399 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:44.656 
18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.656 1+0 records in 00:10:44.656 1+0 records out 00:10:44.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355709 s, 11.5 MB/s 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:44.656 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:10:44.914 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:44.914 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd6 00:10:44.914 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:44.914 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd6 00:10:44.914 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd6 /proc/partitions 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.915 1+0 records in 00:10:44.915 1+0 records out 00:10:44.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397868 s, 10.3 MB/s 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:44.915 18:54:59 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:44.915 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd7 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd7 /proc/partitions 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.173 1+0 records in 00:10:45.173 1+0 records out 00:10:45.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450263 s, 9.1 MB/s 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:45.173 18:54:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd8 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd8 /proc/partitions 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.432 1+0 
records in 00:10:45.432 1+0 records out 00:10:45.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411224 s, 10.0 MB/s 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:45.432 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd9 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd9 /proc/partitions 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # 
break 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.690 1+0 records in 00:10:45.690 1+0 records out 00:10:45.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552853 s, 7.4 MB/s 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:45.690 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:45.948 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:45.948 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:45.948 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:45.948 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:45.948 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@868 -- # local i 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.206 1+0 records in 00:10:46.206 1+0 records out 00:10:46.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523884 s, 7.8 MB/s 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:46.206 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:10:46.465 18:55:00 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.465 1+0 records in 00:10:46.465 1+0 records out 00:10:46.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561164 s, 7.3 MB/s 00:10:46.465 18:55:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@888 -- # return 0 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:46.465 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.723 1+0 records in 00:10:46.723 1+0 records out 00:10:46.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705968 s, 5.8 MB/s 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:46.723 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.981 1+0 records in 00:10:46.981 1+0 records out 00:10:46.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605937 s, 6.8 MB/s 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:46.981 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd14 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd14 /proc/partitions 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:47.239 1+0 records in 00:10:47.239 1+0 records out 00:10:47.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070659 s, 5.8 MB/s 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:47.239 18:55:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd15 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd15 /proc/partitions 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:47.498 1+0 records in 00:10:47.498 1+0 records out 00:10:47.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081167 s, 5.0 MB/s 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:47.498 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:47.498 
18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd0", 00:10:47.757 "bdev_name": "Malloc0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd1", 00:10:47.757 "bdev_name": "Malloc1p0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd2", 00:10:47.757 "bdev_name": "Malloc1p1" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd3", 00:10:47.757 "bdev_name": "Malloc2p0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd4", 00:10:47.757 "bdev_name": "Malloc2p1" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd5", 00:10:47.757 "bdev_name": "Malloc2p2" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd6", 00:10:47.757 "bdev_name": "Malloc2p3" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd7", 00:10:47.757 "bdev_name": "Malloc2p4" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd8", 00:10:47.757 "bdev_name": "Malloc2p5" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd9", 00:10:47.757 "bdev_name": "Malloc2p6" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd10", 00:10:47.757 "bdev_name": "Malloc2p7" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd11", 00:10:47.757 "bdev_name": "TestPT" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd12", 00:10:47.757 "bdev_name": "raid0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd13", 00:10:47.757 "bdev_name": "concat0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd14", 00:10:47.757 "bdev_name": "raid1" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd15", 00:10:47.757 "bdev_name": "AIO0" 00:10:47.757 } 
00:10:47.757 ]' 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd0", 00:10:47.757 "bdev_name": "Malloc0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd1", 00:10:47.757 "bdev_name": "Malloc1p0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd2", 00:10:47.757 "bdev_name": "Malloc1p1" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd3", 00:10:47.757 "bdev_name": "Malloc2p0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd4", 00:10:47.757 "bdev_name": "Malloc2p1" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd5", 00:10:47.757 "bdev_name": "Malloc2p2" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd6", 00:10:47.757 "bdev_name": "Malloc2p3" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd7", 00:10:47.757 "bdev_name": "Malloc2p4" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd8", 00:10:47.757 "bdev_name": "Malloc2p5" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd9", 00:10:47.757 "bdev_name": "Malloc2p6" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd10", 00:10:47.757 "bdev_name": "Malloc2p7" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd11", 00:10:47.757 "bdev_name": "TestPT" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd12", 00:10:47.757 "bdev_name": "raid0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd13", 00:10:47.757 "bdev_name": "concat0" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd14", 00:10:47.757 "bdev_name": "raid1" 00:10:47.757 }, 00:10:47.757 { 00:10:47.757 "nbd_device": "/dev/nbd15", 00:10:47.757 "bdev_name": "AIO0" 00:10:47.757 } 00:10:47.757 ]' 
00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.757 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.015 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.273 18:55:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:48.531 18:55:03 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.531 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.789 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.048 18:55:03 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.048 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.306 18:55:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:49.306 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.563 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd8 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.820 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:10:50.077 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:50.078 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.078 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.078 18:55:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.335 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.592 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:50.850 18:55:05 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.850 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.107 18:55:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.365 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.623 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.880 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:51.881 18:55:06 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:51.881 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:52.138 /dev/nbd0 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:52.138 1+0 records in 00:10:52.138 1+0 records out 00:10:52.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254948 s, 16.1 MB/s 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:52.138 18:55:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:10:52.395 /dev/nbd1 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:52.395 1+0 records in 00:10:52.395 1+0 records out 00:10:52.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262699 s, 15.6 MB/s 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:52.395 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:52.396 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:52.396 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:52.396 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:52.396 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:52.396 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:10:52.653 /dev/nbd10 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 
/proc/partitions 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:52.653 1+0 records in 00:10:52.653 1+0 records out 00:10:52.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289071 s, 14.2 MB/s 00:10:52.653 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:10:52.910 /dev/nbd11 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:10:52.910 18:55:07 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:52.910 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.167 1+0 records in 00:10:53.167 1+0 records out 00:10:53.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305157 s, 13.4 MB/s 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc2p1 /dev/nbd12 00:10:53.167 /dev/nbd12 00:10:53.167 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd12 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd12 /proc/partitions 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.424 1+0 records in 00:10:53.424 1+0 records out 00:10:53.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253339 s, 16.2 MB/s 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 
00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:53.424 18:55:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:10:53.424 /dev/nbd13 00:10:53.424 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:53.424 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd13 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd13 /proc/partitions 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.681 1+0 records in 00:10:53.681 1+0 records out 00:10:53.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452066 s, 9.1 MB/s 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 
00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:53.681 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:10:53.681 /dev/nbd14 00:10:53.938 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd14 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd14 /proc/partitions 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.939 1+0 records in 
00:10:53.939 1+0 records out 00:10:53.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392257 s, 10.4 MB/s 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:53.939 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:10:53.939 /dev/nbd15 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd15 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd15 /proc/partitions 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.197 1+0 records in 00:10:54.197 1+0 records out 00:10:54.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449718 s, 9.1 MB/s 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:54.197 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:10:54.197 /dev/nbd2 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 
00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.455 1+0 records in 00:10:54.455 1+0 records out 00:10:54.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533977 s, 7.7 MB/s 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:54.455 18:55:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:10:54.713 /dev/nbd3 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:10:54.713 18:55:09 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.713 1+0 records in 00:10:54.713 1+0 records out 00:10:54.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402213 s, 10.2 MB/s 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i 
< 16 )) 00:10:54.713 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:10:54.971 /dev/nbd4 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd4 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd4 /proc/partitions 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.971 1+0 records in 00:10:54.971 1+0 records out 00:10:54.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578747 s, 7.1 MB/s 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:54.971 
18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:54.971 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:10:55.228 /dev/nbd5 00:10:55.228 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:10:55.228 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:10:55.228 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd5 00:10:55.228 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd5 /proc/partitions 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.229 1+0 records in 00:10:55.229 1+0 records out 00:10:55.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545791 s, 7.5 MB/s 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:55.229 18:55:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:10:55.487 /dev/nbd6 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd6 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd6 /proc/partitions 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.487 1+0 records in 00:10:55.487 1+0 records out 00:10:55.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040614 s, 10.1 MB/s 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:55.487 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:10:55.745 /dev/nbd7 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd7 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd7 
/proc/partitions 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.745 1+0 records in 00:10:55.745 1+0 records out 00:10:55.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887084 s, 4.6 MB/s 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:55.745 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:10:56.003 /dev/nbd8 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd8 00:10:56.003 18:55:10 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd8 /proc/partitions 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.003 1+0 records in 00:10:56.003 1+0 records out 00:10:56.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533634 s, 7.7 MB/s 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:56.003 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 
/dev/nbd9 00:10:56.261 /dev/nbd9 00:10:56.261 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd9 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd9 /proc/partitions 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.262 1+0 records in 00:10:56.262 1+0 records out 00:10:56.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760132 s, 5.4 MB/s 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:10:56.262 
18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.262 18:55:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd0", 00:10:56.520 "bdev_name": "Malloc0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd1", 00:10:56.520 "bdev_name": "Malloc1p0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd10", 00:10:56.520 "bdev_name": "Malloc1p1" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd11", 00:10:56.520 "bdev_name": "Malloc2p0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd12", 00:10:56.520 "bdev_name": "Malloc2p1" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd13", 00:10:56.520 "bdev_name": "Malloc2p2" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd14", 00:10:56.520 "bdev_name": "Malloc2p3" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd15", 00:10:56.520 "bdev_name": "Malloc2p4" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd2", 00:10:56.520 "bdev_name": "Malloc2p5" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd3", 00:10:56.520 "bdev_name": "Malloc2p6" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd4", 00:10:56.520 "bdev_name": "Malloc2p7" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd5", 00:10:56.520 "bdev_name": "TestPT" 00:10:56.520 }, 00:10:56.520 { 
00:10:56.520 "nbd_device": "/dev/nbd6", 00:10:56.520 "bdev_name": "raid0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd7", 00:10:56.520 "bdev_name": "concat0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd8", 00:10:56.520 "bdev_name": "raid1" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd9", 00:10:56.520 "bdev_name": "AIO0" 00:10:56.520 } 00:10:56.520 ]' 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd0", 00:10:56.520 "bdev_name": "Malloc0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd1", 00:10:56.520 "bdev_name": "Malloc1p0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd10", 00:10:56.520 "bdev_name": "Malloc1p1" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd11", 00:10:56.520 "bdev_name": "Malloc2p0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd12", 00:10:56.520 "bdev_name": "Malloc2p1" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd13", 00:10:56.520 "bdev_name": "Malloc2p2" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd14", 00:10:56.520 "bdev_name": "Malloc2p3" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd15", 00:10:56.520 "bdev_name": "Malloc2p4" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd2", 00:10:56.520 "bdev_name": "Malloc2p5" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd3", 00:10:56.520 "bdev_name": "Malloc2p6" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd4", 00:10:56.520 "bdev_name": "Malloc2p7" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd5", 00:10:56.520 "bdev_name": "TestPT" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd6", 00:10:56.520 "bdev_name": "raid0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd7", 00:10:56.520 
"bdev_name": "concat0" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd8", 00:10:56.520 "bdev_name": "raid1" 00:10:56.520 }, 00:10:56.520 { 00:10:56.520 "nbd_device": "/dev/nbd9", 00:10:56.520 "bdev_name": "AIO0" 00:10:56.520 } 00:10:56.520 ]' 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:56.520 /dev/nbd1 00:10:56.520 /dev/nbd10 00:10:56.520 /dev/nbd11 00:10:56.520 /dev/nbd12 00:10:56.520 /dev/nbd13 00:10:56.520 /dev/nbd14 00:10:56.520 /dev/nbd15 00:10:56.520 /dev/nbd2 00:10:56.520 /dev/nbd3 00:10:56.520 /dev/nbd4 00:10:56.520 /dev/nbd5 00:10:56.520 /dev/nbd6 00:10:56.520 /dev/nbd7 00:10:56.520 /dev/nbd8 00:10:56.520 /dev/nbd9' 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:56.520 /dev/nbd1 00:10:56.520 /dev/nbd10 00:10:56.520 /dev/nbd11 00:10:56.520 /dev/nbd12 00:10:56.520 /dev/nbd13 00:10:56.520 /dev/nbd14 00:10:56.520 /dev/nbd15 00:10:56.520 /dev/nbd2 00:10:56.520 /dev/nbd3 00:10:56.520 /dev/nbd4 00:10:56.520 /dev/nbd5 00:10:56.520 /dev/nbd6 00:10:56.520 /dev/nbd7 00:10:56.520 /dev/nbd8 00:10:56.520 /dev/nbd9' 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 
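The `nbd_dd_data_verify ... write` call traced above drives the write phase that follows: fill a temp file with 1 MiB of random data, copy it to each of the 16 NBD devices with `dd`, then (in the later `verify` pass) byte-compare each device against the same file with `cmp -b -n 1M`. A minimal runnable sketch of that flow, using plain files under a temp directory in place of the real `/dev/nbdX` devices (and dropping `oflag=direct`, which plain files do not need) so it can run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_common.sh write/verify pattern seen in the trace.
# Plain files stand in for /dev/nbdX; a subset of three "devices" is used
# for illustration.
set -e
WORK=$(mktemp -d)
nbd_list=("$WORK/nbd0" "$WORK/nbd1" "$WORK/nbd10")
tmp_file=$WORK/nbdrandtest

# write phase: 1 MiB of random data, copied to every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1 MiB of each device
# against the random pattern; cmp exits non-zero on any mismatch
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"
echo "verify OK"
```

In the real test the same temp file (`$SPDK_DIR/test/bdev/nbdrandtest`) is written to all 16 devices before any verification, so a device that dropped or reordered writes is caught by the `cmp` pass.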
00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:56.520 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:10:56.521 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:56.521 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:56.521 256+0 records in 00:10:56.521 256+0 records out 00:10:56.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101578 s, 103 MB/s 00:10:56.521 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:56.521 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:56.778 256+0 records in 00:10:56.778 256+0 records out 00:10:56.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164901 s, 6.4 MB/s 00:10:56.778 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:56.778 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:56.778 256+0 records in 00:10:56.778 256+0 records out 00:10:56.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169354 s, 6.2 MB/s 00:10:56.779 18:55:11 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:56.779 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:57.036 256+0 records in 00:10:57.036 256+0 records out 00:10:57.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169357 s, 6.2 MB/s 00:10:57.036 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.036 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:57.294 256+0 records in 00:10:57.294 256+0 records out 00:10:57.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168997 s, 6.2 MB/s 00:10:57.294 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.294 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:57.294 256+0 records in 00:10:57.294 256+0 records out 00:10:57.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169194 s, 6.2 MB/s 00:10:57.294 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.294 18:55:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:57.551 256+0 records in 00:10:57.551 256+0 records out 00:10:57.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169482 s, 6.2 MB/s 00:10:57.551 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.551 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 
oflag=direct 00:10:57.809 256+0 records in 00:10:57.809 256+0 records out 00:10:57.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168807 s, 6.2 MB/s 00:10:57.809 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.809 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:10:57.809 256+0 records in 00:10:57.809 256+0 records out 00:10:57.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169382 s, 6.2 MB/s 00:10:57.809 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.809 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:10:58.066 256+0 records in 00:10:58.066 256+0 records out 00:10:58.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16578 s, 6.3 MB/s 00:10:58.066 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.066 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:10:58.066 256+0 records in 00:10:58.066 256+0 records out 00:10:58.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126576 s, 8.3 MB/s 00:10:58.066 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.066 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:10:58.324 256+0 records in 00:10:58.324 256+0 records out 00:10:58.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0929818 s, 11.3 MB/s 00:10:58.324 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:10:58.324 18:55:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:10:58.582 256+0 records in 00:10:58.582 256+0 records out 00:10:58.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167888 s, 6.2 MB/s 00:10:58.582 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.582 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:10:58.582 256+0 records in 00:10:58.582 256+0 records out 00:10:58.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167837 s, 6.2 MB/s 00:10:58.582 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.582 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:10:58.839 256+0 records in 00:10:58.839 256+0 records out 00:10:58.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169471 s, 6.2 MB/s 00:10:58.839 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.839 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:10:58.839 256+0 records in 00:10:58.839 256+0 records out 00:10:58.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106936 s, 9.8 MB/s 00:10:58.839 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.839 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:10:59.098 256+0 records in 
00:10:59.098 256+0 records out 00:10:59.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148969 s, 7.0 MB/s 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.098 18:55:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.356 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.614 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:59.871 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.872 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.129 18:55:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.429 18:55:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.429 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.706 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 
/proc/partitions 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.964 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.222 18:55:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.480 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.738 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.995 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:02.253 18:55:16 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.253 18:55:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:02.253 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:02.253 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.253 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.253 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.511 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:02.769 
18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.769 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.028 18:55:17 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:03.286 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:03.286 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:03.286 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:03.286 18:55:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:03.286 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:03.544 malloc_lvol_verify 00:11:03.544 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:03.801 b6c26edc-25fc-448c-84f9-1fb6480870b2 00:11:03.801 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:04.059 8bc78b3e-3693-4697-ab15-b038e43201a1 00:11:04.059 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:04.317 /dev/nbd0 00:11:04.317 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:04.317 mke2fs 1.46.5 (30-Dec-2021) 00:11:04.317 Discarding device blocks: 0/4096 done 00:11:04.317 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:04.317 00:11:04.318 Allocating group tables: 0/1 done 00:11:04.318 Writing inode tables: 0/1 done 00:11:04.318 Creating journal (1024 blocks): done 00:11:04.318 Writing superblocks and filesystem accounting information: 0/1 done 00:11:04.318 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.318 18:55:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1595405 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 1595405 ']' 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 1595405 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:04.575 18:55:19 
blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1595405 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1595405' 00:11:04.575 killing process with pid 1595405 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # kill 1595405 00:11:04.575 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@973 -- # wait 1595405 00:11:04.833 18:55:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:11:04.833 00:11:04.833 real 0m22.736s 00:11:04.833 user 0m27.818s 00:11:04.833 sys 0m13.178s 00:11:04.833 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:04.833 18:55:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:04.833 ************************************ 00:11:04.833 END TEST bdev_nbd 00:11:04.833 ************************************ 00:11:05.092 18:55:19 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:11:05.092 18:55:19 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:11:05.092 18:55:19 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:11:05.092 18:55:19 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:11:05.092 18:55:19 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:05.092 18:55:19 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:05.092 18:55:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:05.092 ************************************ 00:11:05.092 START TEST bdev_fio 00:11:05.092 ************************************ 00:11:05.092 18:55:19 
blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:05.092 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 
00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # 
echo '[job_Malloc2p0]' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:11:05.092 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:11:05.093 18:55:19 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:05.093 18:55:19 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:05.093 ************************************ 00:11:05.093 START TEST bdev_fio_rw_verify 00:11:05.093 ************************************ 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # 
local fio_dir=/usr/src/fio 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:05.093 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:11:05.093 18:55:19 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:11:05.386 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:11:05.386 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:11:05.386 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:05.386 18:55:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:05.654 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:05.654 fio-3.35 00:11:05.654 Starting 16 threads 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.0 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.1 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.2 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.3 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.4 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.5 cannot be used 00:11:05.654 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.6 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:01.7 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.0 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.1 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.2 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.3 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.4 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.5 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.6 cannot be used 00:11:05.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.654 EAL: Requested device 0000:b6:02.7 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.0 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.1 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.2 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.3 cannot be used 00:11:05.655 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.4 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.5 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.6 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:01.7 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.0 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.1 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:05.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.655 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:17.852 00:11:17.852 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=1600310: Mon Jun 10 18:55:30 2024 00:11:17.852 read: IOPS=102k, BW=398MiB/s (417MB/s)(3976MiB/10002msec) 00:11:17.852 slat (usec): min=2, max=1372, avg=31.40, stdev=14.67 00:11:17.852 clat (usec): min=7, max=1708, avg=257.66, stdev=128.93 
00:11:17.852 lat (usec): min=14, max=1740, avg=289.06, stdev=137.67 00:11:17.852 clat percentiles (usec): 00:11:17.852 | 50.000th=[ 249], 99.000th=[ 553], 99.900th=[ 660], 99.990th=[ 783], 00:11:17.852 | 99.999th=[ 1237] 00:11:17.852 write: IOPS=159k, BW=620MiB/s (650MB/s)(6103MiB/9849msec); 0 zone resets 00:11:17.852 slat (usec): min=5, max=370, avg=43.56, stdev=14.97 00:11:17.852 clat (usec): min=11, max=4017, avg=307.14, stdev=148.01 00:11:17.852 lat (usec): min=27, max=4049, avg=350.70, stdev=156.50 00:11:17.852 clat percentiles (usec): 00:11:17.852 | 50.000th=[ 293], 99.000th=[ 652], 99.900th=[ 955], 99.990th=[ 1156], 00:11:17.852 | 99.999th=[ 2073] 00:11:17.852 bw ( KiB/s): min=514344, max=847763, per=99.05%, avg=628478.84, stdev=5872.45, samples=304 00:11:17.852 iops : min=128586, max=211939, avg=157119.58, stdev=1468.09, samples=304 00:11:17.852 lat (usec) : 10=0.01%, 20=0.05%, 50=1.40%, 100=7.39%, 250=35.06% 00:11:17.852 lat (usec) : 500=48.08%, 750=7.82%, 1000=0.15% 00:11:17.852 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:11:17.852 cpu : usr=99.26%, sys=0.36%, ctx=658, majf=0, minf=1915 00:11:17.852 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.852 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.852 issued rwts: total=1017860,1562372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:17.852 00:11:17.852 Run status group 0 (all jobs): 00:11:17.852 READ: bw=398MiB/s (417MB/s), 398MiB/s-398MiB/s (417MB/s-417MB/s), io=3976MiB (4169MB), run=10002-10002msec 00:11:17.852 WRITE: bw=620MiB/s (650MB/s), 620MiB/s-620MiB/s (650MB/s-650MB/s), io=6103MiB (6399MB), run=9849-9849msec 00:11:17.852 00:11:17.852 real 0m11.654s 00:11:17.852 user 2m52.111s 00:11:17.852 sys 0m1.572s 00:11:17.852 18:55:31 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:11:17.852 18:55:31 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:11:17.852 ************************************ 00:11:17.852 END TEST bdev_fio_rw_verify 00:11:17.852 ************************************ 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:11:17.852 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:17.854 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c81d8ef-daed-49ea-8f3c-2e6cf243c46d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c81d8ef-daed-49ea-8f3c-2e6cf243c46d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "28ec116a-448c-5da4-9314-7e441760cf4b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "28ec116a-448c-5da4-9314-7e441760cf4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' 
"nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "231d4d40-5d15-5a76-976c-464cf999810b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "231d4d40-5d15-5a76-976c-464cf999810b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "7dbb2905-1cb4-5248-93a3-4b73b1cb0e32"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dbb2905-1cb4-5248-93a3-4b73b1cb0e32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "98cf50d9-260e-5ee0-8afa-de99558233b1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "98cf50d9-260e-5ee0-8afa-de99558233b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 
0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "04e177e8-52c7-5193-aaf1-4a70cfcd8e63"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "04e177e8-52c7-5193-aaf1-4a70cfcd8e63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "18442cd2-c439-5e8e-adfa-5ae74687fb84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "18442cd2-c439-5e8e-adfa-5ae74687fb84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' 
"21890405-61e5-5d18-b07b-878124ab58ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21890405-61e5-5d18-b07b-878124ab58ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dce0d170-794c-5f55-b279-12b711a32485"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dce0d170-794c-5f55-b279-12b711a32485",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9af559cb-0c9e-5cb3-b3b0-3c94d0cb4577"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9af559cb-0c9e-5cb3-b3b0-3c94d0cb4577",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": 
true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "be5ce2d8-4443-5154-b4c3-15393943c642"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "be5ce2d8-4443-5154-b4c3-15393943c642",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "14b983dc-5823-56a6-acee-fcae92a3e529"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "14b983dc-5823-56a6-acee-fcae92a3e529",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' 
"28f2f200-7ca8-4385-973a-d3de6428f680"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "28f2f200-7ca8-4385-973a-d3de6428f680",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "28f2f200-7ca8-4385-973a-d3de6428f680",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0a1d8d02-78bf-4af5-bf9c-0a2fa1ee4c68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f285b4a8-e823-47cd-b938-d7356ac166bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "07f5941b-3e7e-465e-afdf-ba3102fcdcf5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1acf5194-94e7-4e77-b724-d7ea9c16daa1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' 
' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3a1e907b-6c57-4f17-932b-f68d198bc69d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c57c9bb7-d4cc-4268-b783-877505bca13d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "7bcfc7bf-c8fa-4550-9e3e-1809694b1637"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "7bcfc7bf-c8fa-4550-9e3e-1809694b1637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:17.854 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:11:17.854 Malloc1p0 00:11:17.854 Malloc1p1 00:11:17.854 Malloc2p0 00:11:17.854 Malloc2p1 00:11:17.854 Malloc2p2 00:11:17.854 Malloc2p3 00:11:17.854 Malloc2p4 00:11:17.854 Malloc2p5 00:11:17.854 Malloc2p6 00:11:17.854 Malloc2p7 00:11:17.854 TestPT 00:11:17.854 
raid0 00:11:17.854 concat0 ]] 00:11:17.854 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c81d8ef-daed-49ea-8f3c-2e6cf243c46d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c81d8ef-daed-49ea-8f3c-2e6cf243c46d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "28ec116a-448c-5da4-9314-7e441760cf4b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "28ec116a-448c-5da4-9314-7e441760cf4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "231d4d40-5d15-5a76-976c-464cf999810b"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "231d4d40-5d15-5a76-976c-464cf999810b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "7dbb2905-1cb4-5248-93a3-4b73b1cb0e32"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7dbb2905-1cb4-5248-93a3-4b73b1cb0e32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "98cf50d9-260e-5ee0-8afa-de99558233b1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "98cf50d9-260e-5ee0-8afa-de99558233b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' 
' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "04e177e8-52c7-5193-aaf1-4a70cfcd8e63"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "04e177e8-52c7-5193-aaf1-4a70cfcd8e63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "18442cd2-c439-5e8e-adfa-5ae74687fb84"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "18442cd2-c439-5e8e-adfa-5ae74687fb84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "21890405-61e5-5d18-b07b-878124ab58ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "21890405-61e5-5d18-b07b-878124ab58ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 
0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dce0d170-794c-5f55-b279-12b711a32485"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dce0d170-794c-5f55-b279-12b711a32485",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "9af559cb-0c9e-5cb3-b3b0-3c94d0cb4577"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9af559cb-0c9e-5cb3-b3b0-3c94d0cb4577",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' 
' "name": "Malloc2p7",' ' "aliases": [' ' "be5ce2d8-4443-5154-b4c3-15393943c642"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "be5ce2d8-4443-5154-b4c3-15393943c642",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "14b983dc-5823-56a6-acee-fcae92a3e529"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "14b983dc-5823-56a6-acee-fcae92a3e529",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "28f2f200-7ca8-4385-973a-d3de6428f680"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "28f2f200-7ca8-4385-973a-d3de6428f680",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "28f2f200-7ca8-4385-973a-d3de6428f680",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0a1d8d02-78bf-4af5-bf9c-0a2fa1ee4c68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f285b4a8-e823-47cd-b938-d7356ac166bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4e0b8a13-5034-4fc5-b6c1-9966bb4a4924",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "07f5941b-3e7e-465e-afdf-ba3102fcdcf5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1acf5194-94e7-4e77-b724-d7ea9c16daa1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa9d3ac2-cb1b-4cb8-aa61-f946763cf4f4",' ' 
"strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3a1e907b-6c57-4f17-932b-f68d198bc69d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c57c9bb7-d4cc-4268-b783-877505bca13d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "7bcfc7bf-c8fa-4550-9e3e-1809694b1637"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "7bcfc7bf-c8fa-4550-9e3e-1809694b1637",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:17.855 18:55:31 blockdev_general.bdev_fio 
-- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable
00:11:17.855 18:55:31 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:11:17.855 ************************************
00:11:17.855 START TEST bdev_fio_trim
00:11:17.855 ************************************
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:17.855
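The xtrace above captures bdev/blockdev.sh emitting one `[job_<bdev>]` / `filename=<bdev>` stanza per unmap-capable bdev selected with `jq` from the `${bdevs[@]}` array. A minimal sketch of that stanza generation; here the bdev names are passed in directly instead of being filtered with `jq -r 'select(.supported_io_types.unmap == true) | .name'`, since no RPC output is available outside the test run:

```shell
#!/bin/sh
# Sketch of the fio job-section generation seen at bdev/blockdev.sh@356-358.
# The real script derives the name list from bdev JSON via jq; the names
# below are just the ones visible in the log above.
gen_fio_jobs() {
  for b in "$@"; do
    echo "[job_$b]"
    echo "filename=$b"
  done
}

gen_fio_jobs Malloc1p0 Malloc1p1 TestPT raid0 concat0
```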
18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local sanitizers
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # shift
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # local asan_lib=
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}"
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libasan
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}'
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib=
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]]
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}"
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}'
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib=
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]]
00:11:17.855 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:11:17.856 18:55:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:17.856 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:17.856 fio-3.35
00:11:17.856 Starting 14 threads
00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.856 EAL: Requested device 0000:b6:01.0 cannot be used
00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.856 EAL: Requested device 0000:b6:01.1 cannot be used
00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.856 EAL: Requested device 0000:b6:01.2 cannot be used
00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.856 EAL: Requested device 0000:b6:01.3 cannot be used
00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT
devices 00:11:17.856 EAL: Requested device 0000:b6:01.4 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:01.5 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:01.6 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:01.7 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.0 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.1 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.2 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.3 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.4 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.5 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.6 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b6:02.7 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.0 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.1 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: 
Requested device 0000:b8:01.2 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.3 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.4 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.5 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.6 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:01.7 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.0 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.1 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:17.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:17.856 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:30.057 00:11:30.057 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=1602481: Mon Jun 10 18:55:42 2024 00:11:30.057 
write: IOPS=139k, BW=543MiB/s (569MB/s)(5429MiB/10001msec); 0 zone resets
00:11:30.057 slat (usec): min=3, max=655, avg=35.75, stdev=11.03
00:11:30.057 clat (usec): min=32, max=3548, avg=251.69, stdev=88.63
00:11:30.057 lat (usec): min=50, max=3578, avg=287.44, stdev=93.15
00:11:30.057 clat percentiles (usec):
00:11:30.057 | 50.000th=[ 243], 99.000th=[ 469], 99.900th=[ 519], 99.990th=[ 603],
00:11:30.057 | 99.999th=[ 881]
00:11:30.057 bw ( KiB/s): min=490522, max=682506, per=100.00%, avg=558101.26, stdev=4317.36, samples=266
00:11:30.057 iops : min=122627, max=170623, avg=139523.68, stdev=1079.33, samples=266
00:11:30.057 trim: IOPS=139k, BW=543MiB/s (569MB/s)(5429MiB/10001msec); 0 zone resets
00:11:30.057 slat (usec): min=4, max=3220, avg=24.43, stdev= 7.66
00:11:30.057 clat (usec): min=11, max=3578, avg=283.52, stdev=97.74
00:11:30.057 lat (usec): min=26, max=3598, avg=307.94, stdev=101.16
00:11:30.057 clat percentiles (usec):
00:11:30.057 | 50.000th=[ 277], 99.000th=[ 515], 99.900th=[ 570], 99.990th=[ 660],
00:11:30.057 | 99.999th=[ 955]
00:11:30.057 bw ( KiB/s): min=490522, max=682514, per=100.00%, avg=558101.68, stdev=4317.44, samples=266
00:11:30.057 iops : min=122627, max=170625, avg=139523.79, stdev=1079.35, samples=266
00:11:30.057 lat (usec) : 20=0.01%, 50=0.05%, 100=1.78%, 250=45.16%, 500=52.08%
00:11:30.057 lat (usec) : 750=0.93%, 1000=0.01%
00:11:30.057 lat (msec) : 2=0.01%, 4=0.01%
00:11:30.057 cpu : usr=99.63%, sys=0.01%, ctx=597, majf=0, minf=836
00:11:30.057 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:30.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:30.057 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:30.057 issued rwts: total=0,1389756,1389762,0 short=0,0,0,0 dropped=0,0,0,0
00:11:30.057 latency : target=0, window=0, percentile=100.00%, depth=8
00:11:30.057
00:11:30.057 Run status group 0 (all jobs):
00:11:30.057 WRITE: bw=543MiB/s (569MB/s), 543MiB/s-543MiB/s (569MB/s-569MB/s), io=5429MiB (5692MB), run=10001-10001msec
00:11:30.057 TRIM: bw=543MiB/s (569MB/s), 543MiB/s-543MiB/s (569MB/s-569MB/s), io=5429MiB (5692MB), run=10001-10001msec
00:11:30.057
00:11:30.057 real 0m11.677s
00:11:30.057 user 2m33.362s
00:11:30.057 sys 0m0.858s
00:11:30.057 18:55:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # xtrace_disable
00:11:30.057 18:55:43 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:11:30.057 ************************************
00:11:30.057 END TEST bdev_fio_trim
00:11:30.057 ************************************
00:11:30.057 18:55:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:11:30.057 18:55:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:11:30.057 18:55:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:11:30.057 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:11:30.057 18:55:43 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:11:30.057
00:11:30.057 real 0m23.714s
00:11:30.057 user 5m25.683s
00:11:30.057 sys 0m2.634s
00:11:30.057 18:55:43 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable
00:11:30.058 18:55:43 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:11:30.058 ************************************
00:11:30.058 END TEST bdev_fio
00:11:30.058 ************************************
00:11:30.058 18:55:43 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:30.058 18:55:43 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:30.058 18:55:43 blockdev_general --
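The trim run's summary reports bandwidth twice, in binary and decimal units (543 MiB/s vs. 569 MB/s, io=5429MiB vs. 5692MB). The pairs are the same number under a 2^20 / 10^6 conversion, which fio truncates rather than rounds; this can be checked with a one-liner:

```shell
# Convert fio's binary-unit figures to decimal units, truncating like fio does:
# 543 MiB/s * 1048576 / 1000000 matches the reported 569 MB/s.
awk 'BEGIN { printf "%d\n", int(543 * 1048576 / 1000000) }'    # prints 569

# Likewise for the total bytes: io=5429MiB matches the reported 5692MB.
awk 'BEGIN { printf "%d\n", int(5429 * 1048576 / 1000000) }'   # prints 5692
```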
common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:11:30.058 18:55:43 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:30.058 18:55:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:30.058 ************************************ 00:11:30.058 START TEST bdev_verify 00:11:30.058 ************************************ 00:11:30.058 18:55:43 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:30.058 [2024-06-10 18:55:43.505064] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:11:30.058 [2024-06-10 18:55:43.505120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604357 ] 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.0 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.1 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.2 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.3 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.4 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.5 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 
0000:b6:01.6 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:01.7 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.0 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.1 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.2 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.3 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.4 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.5 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.6 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b6:02.7 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.0 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.1 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.2 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.3 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.4 cannot be 
used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.5 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.6 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:01.7 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.0 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.1 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:30.058 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:30.058 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:30.058 [2024-06-10 18:55:43.639896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:30.058 [2024-06-10 18:55:43.728011] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.058 [2024-06-10 18:55:43.728016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.058 [2024-06-10 18:55:43.875628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 
00:11:30.058 [2024-06-10 18:55:43.875680] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:30.058 [2024-06-10 18:55:43.875693] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:30.058 [2024-06-10 18:55:43.883635] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:30.058 [2024-06-10 18:55:43.883661] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:30.058 [2024-06-10 18:55:43.891646] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:30.058 [2024-06-10 18:55:43.891668] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:30.058 [2024-06-10 18:55:43.963167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:30.058 [2024-06-10 18:55:43.963218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:30.058 [2024-06-10 18:55:43.963233] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x121c990
00:11:30.058 [2024-06-10 18:55:43.963244] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:30.058 [2024-06-10 18:55:43.964537] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:30.058 [2024-06-10 18:55:43.964566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:30.058 Running I/O for 5 seconds...
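bdevperf was launched with `-m 0x3`, and the reactor messages above confirm cores 0 and 1 starting: the mask is a per-core bit field where bit N selects core N. A small sketch decoding such a mask (the `mask_to_cores` helper is illustrative only, not part of SPDK):

```shell
#!/bin/sh
# Decode an SPDK/DPDK core mask: bit N set means core N runs a reactor.
# -m 0x3 sets bits 0 and 1, matching the two reactors started above.
mask_to_cores() {
  mask=$(( $1 ))
  out=""
  core=0
  while [ "$core" -lt 8 ]; do
    if [ $(( (mask >> core) & 1 )) -eq 1 ]; then
      out="$out $core"
    fi
    core=$((core + 1))
  done
  # unquoted expansion drops the leading space
  echo $out
}

mask_to_cores 0x3   # prints "0 1"
```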
00:11:35.322 00:11:35.322 Latency(us) 00:11:35.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.322 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x1000 00:11:35.322 Malloc0 : 5.18 1038.67 4.06 0.00 0.00 122955.09 458.75 432852.17 00:11:35.322 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x1000 length 0x1000 00:11:35.322 Malloc0 : 5.17 1014.56 3.96 0.00 0.00 125883.90 563.61 473117.49 00:11:35.322 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x800 00:11:35.322 Malloc1p0 : 5.18 543.83 2.12 0.00 0.00 233981.20 3198.16 236558.75 00:11:35.322 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x800 length 0x800 00:11:35.322 Malloc1p0 : 5.17 544.17 2.13 0.00 0.00 233832.52 3211.26 236558.75 00:11:35.322 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x800 00:11:35.322 Malloc1p1 : 5.18 543.61 2.12 0.00 0.00 233320.21 3381.66 233203.30 00:11:35.322 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x800 length 0x800 00:11:35.322 Malloc1p1 : 5.18 543.98 2.12 0.00 0.00 233162.11 3381.66 233203.30 00:11:35.322 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p0 : 5.18 543.37 2.12 0.00 0.00 232662.35 3316.12 229847.86 00:11:35.322 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p0 : 5.18 543.75 2.12 0.00 0.00 232496.54 3355.44 229847.86 00:11:35.322 Job: Malloc2p1 (Core Mask 
0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p1 : 5.18 543.14 2.12 0.00 0.00 232049.72 3303.01 224814.69 00:11:35.322 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p1 : 5.18 543.52 2.12 0.00 0.00 231860.98 3316.12 224814.69 00:11:35.322 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p2 : 5.19 542.90 2.12 0.00 0.00 231417.28 3237.48 221459.25 00:11:35.322 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p2 : 5.18 543.29 2.12 0.00 0.00 231223.24 3250.59 223136.97 00:11:35.322 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p3 : 5.19 542.66 2.12 0.00 0.00 230805.22 3250.59 216426.09 00:11:35.322 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p3 : 5.19 543.06 2.12 0.00 0.00 230596.20 3289.91 218103.81 00:11:35.322 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p4 : 5.19 542.42 2.12 0.00 0.00 230188.25 3329.23 214748.36 00:11:35.322 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p4 : 5.19 542.82 2.12 0.00 0.00 229986.86 3355.44 216426.09 00:11:35.322 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p5 : 5.19 542.22 2.12 0.00 
0.00 229579.79 3224.37 212231.78 00:11:35.322 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p5 : 5.19 542.58 2.12 0.00 0.00 229391.39 3250.59 213070.64 00:11:35.322 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p6 : 5.20 542.01 2.12 0.00 0.00 228950.88 3329.23 207198.62 00:11:35.322 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p6 : 5.27 558.12 2.18 0.00 0.00 222424.14 3329.23 208037.48 00:11:35.322 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x200 00:11:35.322 Malloc2p7 : 5.27 558.74 2.18 0.00 0.00 221530.68 3250.59 205520.90 00:11:35.322 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x200 length 0x200 00:11:35.322 Malloc2p7 : 5.28 557.73 2.18 0.00 0.00 221898.63 3263.69 206359.76 00:11:35.322 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x1000 00:11:35.322 TestPT : 5.27 537.75 2.10 0.00 0.00 228864.11 14680.06 205520.90 00:11:35.322 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x1000 length 0x1000 00:11:35.322 TestPT : 5.28 533.12 2.08 0.00 0.00 231335.19 18245.22 276824.06 00:11:35.322 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x2000 00:11:35.322 raid0 : 5.27 558.39 2.18 0.00 0.00 220145.83 3486.52 188743.68 00:11:35.322 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x2000 
length 0x2000 00:11:35.322 raid0 : 5.28 557.06 2.18 0.00 0.00 220680.05 3486.52 180355.07 00:11:35.322 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x2000 00:11:35.322 concat0 : 5.27 558.15 2.18 0.00 0.00 219541.78 3355.44 181193.93 00:11:35.322 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x2000 length 0x2000 00:11:35.322 concat0 : 5.29 556.87 2.18 0.00 0.00 220039.47 3407.87 179516.21 00:11:35.322 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x1000 00:11:35.322 raid1 : 5.28 557.75 2.18 0.00 0.00 218975.97 3748.66 180355.07 00:11:35.322 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x1000 length 0x1000 00:11:35.322 raid1 : 5.29 556.68 2.17 0.00 0.00 219402.77 3696.23 182871.65 00:11:35.322 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x0 length 0x4e2 00:11:35.322 AIO0 : 5.28 557.40 2.18 0.00 0.00 218449.91 1585.97 191260.26 00:11:35.322 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:35.322 Verification LBA range: start 0x4e2 length 0x4e2 00:11:35.322 AIO0 : 5.29 556.54 2.17 0.00 0.00 218762.80 1605.63 193776.84 00:11:35.322 =================================================================================================================== 00:11:35.322 Total : 18490.89 72.23 0.00 0.00 215829.27 458.75 473117.49 00:11:35.322 00:11:35.322 real 0m6.417s 00:11:35.322 user 0m11.953s 00:11:35.322 sys 0m0.373s 00:11:35.322 18:55:49 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:35.322 18:55:49 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:35.322 ************************************ 00:11:35.322 END TEST 
bdev_verify 00:11:35.322 ************************************ 00:11:35.322 18:55:49 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:35.322 18:55:49 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:11:35.322 18:55:49 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:35.322 18:55:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:35.322 ************************************ 00:11:35.322 START TEST bdev_verify_big_io 00:11:35.322 ************************************ 00:11:35.322 18:55:49 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:35.322 [2024-06-10 18:55:50.015443] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:11:35.322 [2024-06-10 18:55:50.015502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605562 ] 00:11:35.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.0 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.1 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.2 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.3 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.4 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.5 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.6 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:01.7 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.0 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.1 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.2 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.3 cannot be used 00:11:35.580 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.4 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.5 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.6 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b6:02.7 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.0 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.1 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.2 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.3 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.4 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.5 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.6 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:01.7 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.0 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.1 cannot be used 00:11:35.580 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:35.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:35.580 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:35.580 [2024-06-10 18:55:50.149516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.580 [2024-06-10 18:55:50.232277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.580 [2024-06-10 18:55:50.232282] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.837 [2024-06-10 18:55:50.375872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:35.837 [2024-06-10 18:55:50.375926] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:35.837 [2024-06-10 18:55:50.375939] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:35.837 [2024-06-10 18:55:50.383883] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:35.837 [2024-06-10 18:55:50.383907] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:35.837 [2024-06-10 18:55:50.391897] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:35.837 [2024-06-10 18:55:50.391919] bdev.c:8114:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:35.837 [2024-06-10 18:55:50.463194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:35.837 [2024-06-10 18:55:50.463241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.837 [2024-06-10 18:55:50.463257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb6c990 00:11:35.837 [2024-06-10 18:55:50.463269] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.837 [2024-06-10 18:55:50.464556] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.837 [2024-06-10 18:55:50.464590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:36.094 [2024-06-10 18:55:50.642379] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:36.094 [2024-06-10 18:55:50.643529] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:36.094 [2024-06-10 18:55:50.645257] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:36.094 [2024-06-10 18:55:50.646403] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:11:36.094 [2024-06-10 18:55:50.648177] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:36.094 [2024-06-10 18:55:50.649415] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:36.094 [2024-06-10 18:55:50.651229] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.652704] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.653633] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.655027] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.655917] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). 
Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.657296] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.658197] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.659594] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.660485] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.661908] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:36.095 [2024-06-10 18:55:50.682219] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:36.095 [2024-06-10 18:55:50.683926] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). 
Queue depth is limited to 78 00:11:36.095 Running I/O for 5 seconds... 00:11:44.198 00:11:44.198 Latency(us) 00:11:44.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.198 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x100 00:11:44.198 Malloc0 : 5.72 156.74 9.80 0.00 0.00 801297.02 789.71 2187748.97 00:11:44.198 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x100 length 0x100 00:11:44.198 Malloc0 : 5.72 156.68 9.79 0.00 0.00 801173.40 779.88 2496449.74 00:11:44.198 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x80 00:11:44.198 Malloc1p0 : 6.17 88.87 5.55 0.00 0.00 1306677.04 2254.44 2576980.38 00:11:44.198 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x80 length 0x80 00:11:44.198 Malloc1p0 : 6.78 37.78 2.36 0.00 0.00 3019601.83 1409.02 5019743.03 00:11:44.198 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x80 00:11:44.198 Malloc1p1 : 6.56 36.57 2.29 0.00 0.00 3069442.58 1369.70 5153960.76 00:11:44.198 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x80 length 0x80 00:11:44.198 Malloc1p1 : 6.78 37.76 2.36 0.00 0.00 2923948.84 1363.15 4858681.75 00:11:44.198 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x20 00:11:44.198 Malloc2p0 : 6.17 25.94 1.62 0.00 0.00 1098813.76 579.99 1879048.19 00:11:44.198 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x20 length 0x20 00:11:44.198 Malloc2p0 : 6.17 28.50 1.78 0.00 0.00 
996805.44 576.72 1603901.85 00:11:44.198 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x20 00:11:44.198 Malloc2p1 : 6.17 25.93 1.62 0.00 0.00 1089924.66 560.33 1852204.65 00:11:44.198 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x20 length 0x20 00:11:44.198 Malloc2p1 : 6.18 28.49 1.78 0.00 0.00 988149.51 583.27 1577058.30 00:11:44.198 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x20 00:11:44.198 Malloc2p2 : 6.17 25.93 1.62 0.00 0.00 1081112.32 566.89 1825361.10 00:11:44.198 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x20 length 0x20 00:11:44.198 Malloc2p2 : 6.18 28.48 1.78 0.00 0.00 980051.03 579.99 1556925.64 00:11:44.198 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x20 00:11:44.198 Malloc2p3 : 6.17 25.92 1.62 0.00 0.00 1071982.87 573.44 1811939.33 00:11:44.198 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x20 length 0x20 00:11:44.198 Malloc2p3 : 6.18 28.47 1.78 0.00 0.00 971462.13 573.44 1530082.10 00:11:44.198 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x20 00:11:44.198 Malloc2p4 : 6.17 25.92 1.62 0.00 0.00 1062846.76 563.61 1785095.78 00:11:44.198 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x20 length 0x20 00:11:44.198 Malloc2p4 : 6.18 28.47 1.78 0.00 0.00 962727.31 576.72 1509949.44 00:11:44.198 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.198 Verification LBA range: start 0x0 length 0x20 
00:11:44.199 Malloc2p5 : 6.17 25.91 1.62 0.00 0.00 1053625.35 563.61 1758252.24 00:11:44.199 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x20 length 0x20 00:11:44.199 Malloc2p5 : 6.18 28.46 1.78 0.00 0.00 954287.71 566.89 1483105.89 00:11:44.199 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x20 00:11:44.199 Malloc2p6 : 6.18 25.91 1.62 0.00 0.00 1044292.43 573.44 1731408.69 00:11:44.199 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x20 length 0x20 00:11:44.199 Malloc2p6 : 6.30 30.47 1.90 0.00 0.00 889101.93 583.27 1462973.24 00:11:44.199 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x20 00:11:44.199 Malloc2p7 : 6.25 28.14 1.76 0.00 0.00 962211.23 566.89 1711276.03 00:11:44.199 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x20 length 0x20 00:11:44.199 Malloc2p7 : 6.30 30.46 1.90 0.00 0.00 881114.36 576.72 1436129.69 00:11:44.199 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x100 00:11:44.199 TestPT : 6.78 38.08 2.38 0.00 0.00 2687675.86 101921.59 3999688.29 00:11:44.199 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x100 length 0x100 00:11:44.199 TestPT : 6.81 37.61 2.35 0.00 0.00 2751666.59 85144.37 3704409.29 00:11:44.199 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x200 00:11:44.199 raid0 : 6.78 42.46 2.65 0.00 0.00 2359152.58 1415.58 4643933.39 00:11:44.199 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 
Verification LBA range: start 0x200 length 0x200 00:11:44.199 raid0 : 6.81 44.64 2.79 0.00 0.00 2269014.16 1428.68 4294967.30 00:11:44.199 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x200 00:11:44.199 concat0 : 6.78 49.52 3.10 0.00 0.00 2002189.04 1415.58 4482872.12 00:11:44.199 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x200 length 0x200 00:11:44.199 concat0 : 6.81 49.36 3.08 0.00 0.00 2016394.90 1441.79 4133906.02 00:11:44.199 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x100 00:11:44.199 raid1 : 6.78 70.05 4.38 0.00 0.00 1387686.20 1835.01 4321810.84 00:11:44.199 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x100 length 0x100 00:11:44.199 raid1 : 6.78 67.95 4.25 0.00 0.00 1419404.52 1821.90 3972844.75 00:11:44.199 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x0 length 0x4e 00:11:44.199 AIO0 : 6.79 55.98 3.50 0.00 0.00 1032252.86 730.73 2738041.65 00:11:44.199 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:11:44.199 Verification LBA range: start 0x4e length 0x4e 00:11:44.199 AIO0 : 6.81 69.61 4.35 0.00 0.00 823172.65 750.39 2268279.60 00:11:44.199 =================================================================================================================== 00:11:44.199 Total : 1481.07 92.57 0.00 0.00 1422221.26 560.33 5153960.76 00:11:44.199 00:11:44.199 real 0m8.005s 00:11:44.199 user 0m15.073s 00:11:44.199 sys 0m0.388s 00:11:44.199 18:55:57 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:44.199 18:55:57 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.199 
************************************ 00:11:44.199 END TEST bdev_verify_big_io 00:11:44.199 ************************************ 00:11:44.199 18:55:58 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:44.199 18:55:58 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:11:44.199 18:55:58 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:44.199 18:55:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:44.199 ************************************ 00:11:44.199 START TEST bdev_write_zeroes 00:11:44.199 ************************************ 00:11:44.199 18:55:58 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:44.199 [2024-06-10 18:55:58.102536] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:11:44.199 [2024-06-10 18:55:58.102607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606941 ] 00:11:44.199 [2024-06-10 18:55:58.234987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.199 [2024-06-10 18:55:58.318911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.199 [2024-06-10 18:55:58.469762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.199 [2024-06-10 18:55:58.469811] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:44.199 [2024-06-10 18:55:58.469824] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:44.199 [2024-06-10 18:55:58.477768] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.199 [2024-06-10 18:55:58.477793] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.199 [2024-06-10 18:55:58.485780] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.199 [2024-06-10 18:55:58.485803] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.199 [2024-06-10 18:55:58.557074] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.200 [2024-06-10 18:55:58.557119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.200 [2024-06-10 18:55:58.557134] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc241f0 00:11:44.200 [2024-06-10 18:55:58.557146] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.200 [2024-06-10 18:55:58.558411] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.200 [2024-06-10 18:55:58.558439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:44.200 Running I/O for 1 seconds... 00:11:45.132 00:11:45.132 Latency(us) 00:11:45.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.132 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.132 Malloc0 : 1.03 5481.01 21.41 0.00 0.00 23348.61 599.65 39007.03 00:11:45.132 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.132 Malloc1p0 : 1.03 5473.73 21.38 0.00 0.00 23339.71 819.20 38168.17 00:11:45.132 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.132 Malloc1p1 : 1.03 5466.48 21.35 0.00 0.00 23324.81 819.20 37329.31 00:11:45.132 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.132 Malloc2p0 : 1.03 5459.21 21.33 0.00 0.00 23306.68 812.65 36490.44 00:11:45.132 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.132 Malloc2p1 : 1.03 5452.04 21.30 0.00 0.00 23286.94 812.65 35651.58 00:11:45.132 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 Malloc2p2 : 1.05 5475.69 21.39 0.00 0.00 23141.13 825.75 34812.72 00:11:45.133 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 
Malloc2p3 : 1.05 5468.54 21.36 0.00 0.00 23131.04 815.92 34183.58 00:11:45.133 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 Malloc2p4 : 1.05 5461.50 21.33 0.00 0.00 23115.75 812.65 33344.72 00:11:45.133 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 Malloc2p5 : 1.06 5454.47 21.31 0.00 0.00 23098.69 812.65 32505.86 00:11:45.133 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 Malloc2p6 : 1.06 5447.40 21.28 0.00 0.00 23084.11 812.65 31667.00 00:11:45.133 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 Malloc2p7 : 1.06 5440.43 21.25 0.00 0.00 23065.27 812.65 30828.13 00:11:45.133 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 TestPT : 1.06 5433.49 21.22 0.00 0.00 23049.16 845.41 29989.27 00:11:45.133 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 raid0 : 1.06 5425.40 21.19 0.00 0.00 23027.00 1474.56 28521.27 00:11:45.133 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 concat0 : 1.06 5417.53 21.16 0.00 0.00 22981.41 1454.90 27053.26 00:11:45.133 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 raid1 : 1.07 5407.71 21.12 0.00 0.00 22930.54 2319.97 24641.54 00:11:45.133 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:45.133 AIO0 : 1.07 5401.68 21.10 0.00 0.00 22855.99 950.27 24222.11 00:11:45.133 =================================================================================================================== 00:11:45.133 Total : 87166.30 340.49 0.00 0.00 23129.09 599.65 39007.03 00:11:45.698 00:11:45.698 real 0m2.130s 00:11:45.698 user 0m1.739s 00:11:45.698 sys 0m0.330s 00:11:45.698 18:56:00 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:11:45.698 18:56:00 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:45.698 ************************************ 00:11:45.698 END TEST bdev_write_zeroes 00:11:45.699 ************************************ 00:11:45.699 18:56:00 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:45.699 18:56:00 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:11:45.699 18:56:00 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:45.699 18:56:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:45.699 ************************************ 00:11:45.699 START TEST bdev_json_nonenclosed 00:11:45.699 ************************************ 00:11:45.699 18:56:00 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:45.699 [2024-06-10 18:56:00.322098] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:11:45.699 [2024-06-10 18:56:00.322151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607244 ]
00:11:45.699 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:45.699 EAL: Requested device 0000:b6:01.0 cannot be used
00:11:45.699 [the two messages above repeat identically for devices 0000:b6:01.1 through 0000:b8:02.1]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.699 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:45.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.699 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:45.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.699 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:45.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.699 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:45.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.699 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:45.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:45.699 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:45.962 [2024-06-10 18:56:00.456511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.962 [2024-06-10 18:56:00.539970] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.962 [2024-06-10 18:56:00.540034] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
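The *ERROR* above is json_config rejecting a config file whose top level is not a single JSON object. The actual contents of nonenclosed.json are not shown in this log, so the input strings below are illustrative only; the sketch merely mimics the shape of the check:

```python
import json

def enclosed_in_braces(text):
    # The whole config must parse as one JSON object, i.e. be enclosed in {}.
    try:
        return isinstance(json.loads(text), dict)
    except json.JSONDecodeError:
        return False

print(enclosed_in_braces('"subsystems": []'))    # False: bare member, no braces
print(enclosed_in_braces('{"subsystems": []}'))  # True
```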
00:11:45.962 [2024-06-10 18:56:00.540052] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:45.962 [2024-06-10 18:56:00.540064] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:45.962 00:11:45.962 real 0m0.362s 00:11:45.962 user 0m0.216s 00:11:45.962 sys 0m0.143s 00:11:45.962 18:56:00 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:45.962 18:56:00 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:45.962 ************************************ 00:11:45.962 END TEST bdev_json_nonenclosed 00:11:45.962 ************************************ 00:11:45.962 18:56:00 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:45.962 18:56:00 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:11:45.962 18:56:00 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:45.962 18:56:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:45.962 ************************************ 00:11:45.962 START TEST bdev_json_nonarray 00:11:45.962 ************************************ 00:11:46.311 18:56:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:46.311 [2024-06-10 18:56:00.774971] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:11:46.311 [2024-06-10 18:56:00.775035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607392 ]
00:11:46.311 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.311 EAL: Requested device 0000:b6:01.0 cannot be used
00:11:46.311 [the two messages above repeat identically for devices 0000:b6:01.1 through 0000:b8:02.1]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.311 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:46.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.311 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:46.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.311 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:46.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.311 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:46.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.311 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:46.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.311 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:46.311 [2024-06-10 18:56:00.909887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.311 [2024-06-10 18:56:00.993173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.311 [2024-06-10 18:56:00.993246] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
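The nonarray case fails one step later: the file parses as a JSON object, but its "subsystems" member is not an array. Again, the real nonarray.json contents are not shown in the log, so this is only a hedged sketch of the shape of the check:

```python
import json

def subsystems_is_array(text):
    cfg = json.loads(text)
    # The top-level "subsystems" member must be a JSON array.
    return isinstance(cfg.get("subsystems"), list)

print(subsystems_is_array('{"subsystems": {}}'))  # False: object, not array
print(subsystems_is_array('{"subsystems": []}'))  # True
```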
00:11:46.311 [2024-06-10 18:56:00.993266] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:46.311 [2024-06-10 18:56:00.993277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.588 00:11:46.588 real 0m0.365s 00:11:46.588 user 0m0.207s 00:11:46.588 sys 0m0.155s 00:11:46.588 18:56:01 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:46.588 18:56:01 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:46.588 ************************************ 00:11:46.588 END TEST bdev_json_nonarray 00:11:46.588 ************************************ 00:11:46.588 18:56:01 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:11:46.588 18:56:01 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:11:46.588 18:56:01 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:46.588 18:56:01 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:46.588 18:56:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:46.588 ************************************ 00:11:46.588 START TEST bdev_qos 00:11:46.588 ************************************ 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # qos_test_suite '' 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=1607537 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 1607537' 00:11:46.588 Process qos testing pid: 1607537 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:11:46.588 18:56:01 
blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 1607537 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@830 -- # '[' -z 1607537 ']' 00:11:46.588 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.589 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:46.589 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.589 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:46.589 18:56:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:46.589 [2024-06-10 18:56:01.227252] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:11:46.589 [2024-06-10 18:56:01.227313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607537 ]
00:11:46.589 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.589 EAL: Requested device 0000:b6:01.0 cannot be used
00:11:46.589 [the two messages above repeat identically for devices 0000:b6:01.1 through 0000:b8:02.1]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.589 EAL: Requested device 0000:b8:02.2 cannot be used 00:11:46.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.589 EAL: Requested device 0000:b8:02.3 cannot be used 00:11:46.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.589 EAL: Requested device 0000:b8:02.4 cannot be used 00:11:46.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.589 EAL: Requested device 0000:b8:02.5 cannot be used 00:11:46.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.589 EAL: Requested device 0000:b8:02.6 cannot be used 00:11:46.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.589 EAL: Requested device 0000:b8:02.7 cannot be used 00:11:46.846 [2024-06-10 18:56:01.351896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.846 [2024-06-10 18:56:01.435440] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@863 -- # return 0 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:47.412 Malloc_0 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_0 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_timeout= 
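The `rpc_cmd bdev_malloc_create -b Malloc_0 128 512` call above creates a 128 MiB malloc bdev with a 512-byte block size, so the block count later reported by `bdev_get_bdevs` is fully determined (quick arithmetic check):

```python
size_mib = 128      # first positional argument to bdev_malloc_create (MiB)
block_size = 512    # second positional argument (bytes per block)

num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)   # 262144, matching "num_blocks": 262144 in the dump below
```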
00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local i 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.412 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:47.669 [ 00:11:47.669 { 00:11:47.669 "name": "Malloc_0", 00:11:47.669 "aliases": [ 00:11:47.669 "1cc3cd2a-7e99-498b-bb96-1eafb188db76" 00:11:47.669 ], 00:11:47.669 "product_name": "Malloc disk", 00:11:47.669 "block_size": 512, 00:11:47.669 "num_blocks": 262144, 00:11:47.669 "uuid": "1cc3cd2a-7e99-498b-bb96-1eafb188db76", 00:11:47.669 "assigned_rate_limits": { 00:11:47.669 "rw_ios_per_sec": 0, 00:11:47.669 "rw_mbytes_per_sec": 0, 00:11:47.669 "r_mbytes_per_sec": 0, 00:11:47.669 "w_mbytes_per_sec": 0 00:11:47.669 }, 00:11:47.669 "claimed": false, 00:11:47.669 "zoned": false, 00:11:47.669 "supported_io_types": { 00:11:47.669 "read": true, 00:11:47.669 "write": true, 00:11:47.669 "unmap": true, 00:11:47.669 "write_zeroes": true, 00:11:47.669 "flush": true, 00:11:47.669 "reset": true, 00:11:47.669 "compare": false, 00:11:47.669 "compare_and_write": false, 00:11:47.669 "abort": true, 00:11:47.669 "nvme_admin": false, 00:11:47.669 "nvme_io": false 00:11:47.669 
}, 00:11:47.669 "memory_domains": [ 00:11:47.669 { 00:11:47.669 "dma_device_id": "system", 00:11:47.669 "dma_device_type": 1 00:11:47.669 }, 00:11:47.669 { 00:11:47.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.669 "dma_device_type": 2 00:11:47.669 } 00:11:47.669 ], 00:11:47.669 "driver_specific": {} 00:11:47.669 } 00:11:47.669 ] 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # return 0 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:47.669 Null_1 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_name=Null_1 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local i 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.669 
18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:47.669 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:47.669 [ 00:11:47.669 { 00:11:47.669 "name": "Null_1", 00:11:47.669 "aliases": [ 00:11:47.669 "5c81012b-417d-40ee-aa9a-401b08849c58" 00:11:47.669 ], 00:11:47.669 "product_name": "Null disk", 00:11:47.669 "block_size": 512, 00:11:47.670 "num_blocks": 262144, 00:11:47.670 "uuid": "5c81012b-417d-40ee-aa9a-401b08849c58", 00:11:47.670 "assigned_rate_limits": { 00:11:47.670 "rw_ios_per_sec": 0, 00:11:47.670 "rw_mbytes_per_sec": 0, 00:11:47.670 "r_mbytes_per_sec": 0, 00:11:47.670 "w_mbytes_per_sec": 0 00:11:47.670 }, 00:11:47.670 "claimed": false, 00:11:47.670 "zoned": false, 00:11:47.670 "supported_io_types": { 00:11:47.670 "read": true, 00:11:47.670 "write": true, 00:11:47.670 "unmap": false, 00:11:47.670 "write_zeroes": true, 00:11:47.670 "flush": false, 00:11:47.670 "reset": true, 00:11:47.670 "compare": false, 00:11:47.670 "compare_and_write": false, 00:11:47.670 "abort": true, 00:11:47.670 "nvme_admin": false, 00:11:47.670 "nvme_io": false 00:11:47.670 }, 00:11:47.670 "driver_specific": {} 00:11:47.670 } 00:11:47.670 ] 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # return 0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # 
local qos_lower_bw_limit=2 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:11:47.670 18:56:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:11:47.670 Running I/O for 60 seconds... 
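The qos_function_test figures that follow line up with a simple derivation: the measured unthrottled IOPS is cut to roughly a quarter and rounded down to the nearest thousand to pick the limit, run_qos_test then accepts results within ±10% of that limit, and the bandwidth limit is derived from the measured KiB/s in a similar way. The exact shell arithmetic in blockdev.sh is not shown in this log, so the formulas below are assumptions chosen to reproduce the logged values (69525 → 17000 → 15300/18700, and 86016 → 8):

```python
def derive_iops_limit(measured_iops):
    # assumed rule: a quarter of the measured rate, rounded down to 1000s
    return int(measured_iops) // 4 // 1000 * 1000

def tolerance_band(limit):
    # run_qos_test accepts anything within +/-10% of the configured limit
    return limit * 90 // 100, limit * 110 // 100

def derive_bw_limit_mib(measured_kib_per_s):
    # assumed rule: convert KiB/s to MiB/s, then take a tenth
    return int(measured_kib_per_s) // 1024 // 10

iops_limit = derive_iops_limit(69525.31)   # measured Malloc_0 IOPS from the log
low, high = tolerance_band(iops_limit)
bw_limit = derive_bw_limit_mib(86016.00)   # measured Null_1 KiB/s from the log
print(iops_limit, low, high, bw_limit)     # 17000 15300 18700 8
```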
00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 69525.31 278101.23 0.00 0.00 280576.00 0.00 0.00 ' 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=69525.31 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 69525 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=69525 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=17000 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 17000 -gt 1000 ']' 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:52.933 18:56:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:52.933 ************************************ 00:11:52.933 START TEST bdev_qos_iops 00:11:52.933 ************************************ 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # run_qos_test 17000 IOPS Malloc_0 00:11:52.933 18:56:07 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=17000 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:11:52.933 18:56:07 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 17004.65 68018.61 0.00 0.00 69020.00 0.00 0.00 ' 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=17004.65 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 17004 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=17004 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=15300 00:11:58.189 18:56:12 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=18700 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 17004 -lt 15300 ']' 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 17004 -gt 18700 ']' 00:11:58.189 00:11:58.189 real 0m5.223s 00:11:58.189 user 0m0.114s 00:11:58.189 sys 0m0.038s 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:58.189 18:56:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:11:58.189 ************************************ 00:11:58.189 END TEST bdev_qos_iops 00:11:58.189 ************************************ 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:11:58.189 18:56:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 21173.62 84694.47 0.00 0.00 86016.00 0.00 0.00 ' 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:03.454 18:56:17 
blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=86016.00 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 86016 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=86016 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=8 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 8 -lt 2 ']' 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:03.454 18:56:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:03.454 ************************************ 00:12:03.454 START TEST bdev_qos_bw 00:12:03.454 ************************************ 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # run_qos_test 8 BANDWIDTH Null_1 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=8 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local 
limit_type=BANDWIDTH 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:12:03.454 18:56:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2048.34 8193.34 0.00 0.00 8364.00 0.00 0.00 ' 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=8364.00 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 8364 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=8364 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=8192 00:12:08.718 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=7372 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=9011 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 8364 -lt 7372 ']' 00:12:08.719 18:56:23 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 8364 -gt 9011 ']' 00:12:08.719 00:12:08.719 real 0m5.250s 00:12:08.719 user 0m0.114s 00:12:08.719 sys 0m0.041s 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 ************************************ 00:12:08.719 END TEST bdev_qos_bw 00:12:08.719 ************************************ 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:08.719 18:56:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:08.719 ************************************ 00:12:08.719 START TEST bdev_qos_ro_bw 00:12:08.719 ************************************ 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result 
BANDWIDTH Malloc_0 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:12:08.719 18:56:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.98 2047.94 0.00 0.00 2060.00 0.00 0.00 ' 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2060.00 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2060 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2060 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 
-- # upper_limit=2252 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']' 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']' 00:12:13.980 00:12:13.980 real 0m5.179s 00:12:13.980 user 0m0.102s 00:12:13.980 sys 0m0.047s 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:13.980 18:56:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:12:13.980 ************************************ 00:12:13.980 END TEST bdev_qos_ro_bw 00:12:13.980 ************************************ 00:12:13.980 18:56:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:13.980 18:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.980 18:56:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:14.545 00:12:14.545 Latency(us) 00:12:14.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.545 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:14.545 Malloc_0 : 26.75 23409.29 91.44 0.00 0.00 10831.34 1808.79 503316.48 00:12:14.545 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:14.545 Null_1 : 26.89 22285.90 87.05 0.00 0.00 11460.54 720.90 150156.08 00:12:14.545 
=================================================================================================================== 00:12:14.545 Total : 45695.19 178.50 0.00 0.00 11139.06 720.90 503316.48 00:12:14.545 0 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 1607537 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@949 -- # '[' -z 1607537 ']' 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # kill -0 1607537 00:12:14.545 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # uname 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1607537 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1607537' 00:12:14.803 killing process with pid 1607537 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # kill 1607537 00:12:14.803 Received shutdown signal, test time was about 26.962425 seconds 00:12:14.803 00:12:14.803 Latency(us) 00:12:14.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.803 =================================================================================================================== 00:12:14.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # wait 1607537 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM 
EXIT 00:12:14.803 00:12:14.803 real 0m28.375s 00:12:14.803 user 0m29.101s 00:12:14.803 sys 0m0.816s 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:14.803 18:56:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:14.803 ************************************ 00:12:14.803 END TEST bdev_qos 00:12:14.803 ************************************ 00:12:15.062 18:56:29 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:15.062 18:56:29 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:15.062 18:56:29 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:15.062 18:56:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:15.062 ************************************ 00:12:15.062 START TEST bdev_qd_sampling 00:12:15.062 ************************************ 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # qd_sampling_test_suite '' 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=1612448 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 1612448' 00:12:15.062 Process bdev QD sampling period testing pid: 1612448 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 1612448 00:12:15.062 18:56:29 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@830 -- # '[' -z 1612448 ']' 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:15.062 18:56:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:15.062 [2024-06-10 18:56:29.683934] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:12:15.062 [2024-06-10 18:56:29.683991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612448 ] 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.062 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:15.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 
EAL: Requested device 0000:b8:01.3 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:15.063 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:15.063 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:15.063 [2024-06-10 18:56:29.817803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:15.321 [2024-06-10 18:56:29.905991] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.321 [2024-06-10 18:56:29.905996] 
reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@863 -- # return 0 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:15.886 Malloc_QD 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_QD 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local i 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:15.886 
18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:15.886 [ 00:12:15.886 { 00:12:15.886 "name": "Malloc_QD", 00:12:15.886 "aliases": [ 00:12:15.886 "bc2e08d6-fd4b-4bea-b8a0-234f1ee57f02" 00:12:15.886 ], 00:12:15.886 "product_name": "Malloc disk", 00:12:15.886 "block_size": 512, 00:12:15.886 "num_blocks": 262144, 00:12:15.886 "uuid": "bc2e08d6-fd4b-4bea-b8a0-234f1ee57f02", 00:12:15.886 "assigned_rate_limits": { 00:12:15.886 "rw_ios_per_sec": 0, 00:12:15.886 "rw_mbytes_per_sec": 0, 00:12:15.886 "r_mbytes_per_sec": 0, 00:12:15.886 "w_mbytes_per_sec": 0 00:12:15.886 }, 00:12:15.886 "claimed": false, 00:12:15.886 "zoned": false, 00:12:15.886 "supported_io_types": { 00:12:15.886 "read": true, 00:12:15.886 "write": true, 00:12:15.886 "unmap": true, 00:12:15.886 "write_zeroes": true, 00:12:15.886 "flush": true, 00:12:15.886 "reset": true, 00:12:15.886 "compare": false, 00:12:15.886 "compare_and_write": false, 00:12:15.886 "abort": true, 00:12:15.886 "nvme_admin": false, 00:12:15.886 "nvme_io": false 00:12:15.886 }, 00:12:15.886 "memory_domains": [ 00:12:15.886 { 00:12:15.886 "dma_device_id": "system", 00:12:15.886 "dma_device_type": 1 00:12:15.886 }, 00:12:15.886 { 00:12:15.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.886 "dma_device_type": 2 00:12:15.886 } 00:12:15.886 ], 00:12:15.886 "driver_specific": {} 00:12:15.886 } 00:12:15.886 ] 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:15.886 18:56:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # return 0 00:12:16.143 18:56:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:12:16.143 18:56:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:12:16.143 Running I/O for 5 seconds... 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:12:18.040 "tick_rate": 2500000000, 00:12:18.040 "ticks": 4619324274169346, 00:12:18.040 "bdevs": [ 00:12:18.040 { 00:12:18.040 "name": "Malloc_QD", 00:12:18.040 "bytes_read": 814789120, 00:12:18.040 "num_read_ops": 198916, 00:12:18.040 "bytes_written": 0, 00:12:18.040 "num_write_ops": 0, 00:12:18.040 "bytes_unmapped": 0, 00:12:18.040 "num_unmap_ops": 0, 00:12:18.040 "bytes_copied": 0, 00:12:18.040 "num_copy_ops": 0, 00:12:18.040 "read_latency_ticks": 2449904302784, 00:12:18.040 "max_read_latency_ticks": 14744716, 
00:12:18.040 "min_read_latency_ticks": 266634, 00:12:18.040 "write_latency_ticks": 0, 00:12:18.040 "max_write_latency_ticks": 0, 00:12:18.040 "min_write_latency_ticks": 0, 00:12:18.040 "unmap_latency_ticks": 0, 00:12:18.040 "max_unmap_latency_ticks": 0, 00:12:18.040 "min_unmap_latency_ticks": 0, 00:12:18.040 "copy_latency_ticks": 0, 00:12:18.040 "max_copy_latency_ticks": 0, 00:12:18.040 "min_copy_latency_ticks": 0, 00:12:18.040 "io_error": {}, 00:12:18.040 "queue_depth_polling_period": 10, 00:12:18.040 "queue_depth": 512, 00:12:18.040 "io_time": 30, 00:12:18.040 "weighted_io_time": 15360 00:12:18.040 } 00:12:18.040 ] 00:12:18.040 }' 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:18.040 00:12:18.040 Latency(us) 00:12:18.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.040 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:18.040 Malloc_QD : 1.99 51172.74 199.89 0.00 0.00 4990.68 1297.61 5373.95 00:12:18.040 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:18.040 Malloc_QD : 1.99 52652.67 205.67 0.00 0.00 4850.88 910.95 5898.24 00:12:18.040 
=================================================================================================================== 00:12:18.040 Total : 103825.42 405.57 0.00 0.00 4919.74 910.95 5898.24 00:12:18.040 0 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 1612448 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@949 -- # '[' -z 1612448 ']' 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # kill -0 1612448 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # uname 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:18.040 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1612448 00:12:18.298 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:18.298 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:18.298 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1612448' 00:12:18.298 killing process with pid 1612448 00:12:18.298 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # kill 1612448 00:12:18.298 Received shutdown signal, test time was about 2.076547 seconds 00:12:18.298 00:12:18.298 Latency(us) 00:12:18.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.298 =================================================================================================================== 00:12:18.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:18.298 18:56:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # wait 1612448 
00:12:18.298 18:56:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:12:18.298 00:12:18.298 real 0m3.396s 00:12:18.298 user 0m6.654s 00:12:18.298 sys 0m0.424s 00:12:18.298 18:56:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:18.298 18:56:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:18.298 ************************************ 00:12:18.298 END TEST bdev_qd_sampling 00:12:18.298 ************************************ 00:12:18.557 18:56:33 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:12:18.557 18:56:33 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:18.557 18:56:33 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:18.557 18:56:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:18.557 ************************************ 00:12:18.557 START TEST bdev_error 00:12:18.557 ************************************ 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # error_test_suite '' 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=1613021 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 1613021' 00:12:18.557 Process error testing pid: 1613021 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:18.557 18:56:33 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # 
waitforlisten 1613021 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # '[' -z 1613021 ']' 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:18.557 18:56:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:18.557 [2024-06-10 18:56:33.163586] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:12:18.557 [2024-06-10 18:56:33.163650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613021 ] 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.557 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:12:18.557 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:18.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:18.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.558 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:18.558 [2024-06-10 18:56:33.290329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.815 [2024-06-10 18:56:33.375914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.381 18:56:34 
blockdev_general.bdev_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@863 -- # return 0 00:12:19.381 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 Dev_1 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_1 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 [ 
00:12:19.381 { 00:12:19.381 "name": "Dev_1", 00:12:19.381 "aliases": [ 00:12:19.381 "66f264b0-3ebd-43e0-80f7-cda167f12c3b" 00:12:19.381 ], 00:12:19.381 "product_name": "Malloc disk", 00:12:19.381 "block_size": 512, 00:12:19.381 "num_blocks": 262144, 00:12:19.381 "uuid": "66f264b0-3ebd-43e0-80f7-cda167f12c3b", 00:12:19.381 "assigned_rate_limits": { 00:12:19.381 "rw_ios_per_sec": 0, 00:12:19.381 "rw_mbytes_per_sec": 0, 00:12:19.381 "r_mbytes_per_sec": 0, 00:12:19.381 "w_mbytes_per_sec": 0 00:12:19.381 }, 00:12:19.381 "claimed": false, 00:12:19.381 "zoned": false, 00:12:19.381 "supported_io_types": { 00:12:19.381 "read": true, 00:12:19.381 "write": true, 00:12:19.381 "unmap": true, 00:12:19.381 "write_zeroes": true, 00:12:19.381 "flush": true, 00:12:19.381 "reset": true, 00:12:19.381 "compare": false, 00:12:19.381 "compare_and_write": false, 00:12:19.381 "abort": true, 00:12:19.381 "nvme_admin": false, 00:12:19.381 "nvme_io": false 00:12:19.381 }, 00:12:19.381 "memory_domains": [ 00:12:19.381 { 00:12:19.381 "dma_device_id": "system", 00:12:19.381 "dma_device_type": 1 00:12:19.381 }, 00:12:19.381 { 00:12:19.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.381 "dma_device_type": 2 00:12:19.381 } 00:12:19.381 ], 00:12:19.381 "driver_specific": {} 00:12:19.381 } 00:12:19.381 ] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:12:19.381 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 true 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- 
# rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 Dev_2 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_2 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.381 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.381 [ 00:12:19.381 { 00:12:19.381 "name": "Dev_2", 00:12:19.381 "aliases": [ 00:12:19.381 "0d827ecb-eef3-49ac-bfc2-257910a08c7b" 00:12:19.381 ], 00:12:19.381 "product_name": "Malloc disk", 00:12:19.381 "block_size": 512, 00:12:19.381 "num_blocks": 262144, 00:12:19.381 
"uuid": "0d827ecb-eef3-49ac-bfc2-257910a08c7b", 00:12:19.381 "assigned_rate_limits": { 00:12:19.381 "rw_ios_per_sec": 0, 00:12:19.381 "rw_mbytes_per_sec": 0, 00:12:19.381 "r_mbytes_per_sec": 0, 00:12:19.381 "w_mbytes_per_sec": 0 00:12:19.381 }, 00:12:19.381 "claimed": false, 00:12:19.381 "zoned": false, 00:12:19.381 "supported_io_types": { 00:12:19.381 "read": true, 00:12:19.381 "write": true, 00:12:19.381 "unmap": true, 00:12:19.381 "write_zeroes": true, 00:12:19.381 "flush": true, 00:12:19.381 "reset": true, 00:12:19.381 "compare": false, 00:12:19.640 "compare_and_write": false, 00:12:19.640 "abort": true, 00:12:19.640 "nvme_admin": false, 00:12:19.640 "nvme_io": false 00:12:19.640 }, 00:12:19.640 "memory_domains": [ 00:12:19.640 { 00:12:19.640 "dma_device_id": "system", 00:12:19.640 "dma_device_type": 1 00:12:19.640 }, 00:12:19.640 { 00:12:19.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.640 "dma_device_type": 2 00:12:19.640 } 00:12:19.640 ], 00:12:19.640 "driver_specific": {} 00:12:19.640 } 00:12:19.640 ] 00:12:19.640 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.640 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:12:19.640 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:19.640 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.640 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:19.640 18:56:34 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.640 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:12:19.640 18:56:34 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:19.640 Running I/O for 5 seconds... 
00:12:20.574 18:56:35 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 1613021 00:12:20.574 18:56:35 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 1613021' 00:12:20.574 Process is existed as continue on error is set. Pid: 1613021 00:12:20.574 18:56:35 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:12:20.574 18:56:35 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.574 18:56:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:20.574 18:56:35 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.574 18:56:35 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:12:20.574 18:56:35 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.574 18:56:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:20.574 18:56:35 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.574 18:56:35 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:12:20.574 Timeout while waiting for response: 00:12:20.574 00:12:20.574 00:12:24.823 00:12:24.823 Latency(us) 00:12:24.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.823 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:24.823 EE_Dev_1 : 0.90 41338.31 161.48 5.54 0.00 383.78 117.96 625.87 00:12:24.823 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:24.823 Dev_2 : 5.00 89943.13 351.34 0.00 0.00 174.69 60.21 19084.08 00:12:24.823 =================================================================================================================== 00:12:24.823 Total : 131281.44 512.82 5.54 0.00 190.71 60.21 19084.08 00:12:25.756 18:56:40 blockdev_general.bdev_error -- 
bdev/blockdev.sh@499 -- # killprocess 1613021 00:12:25.756 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@949 -- # '[' -z 1613021 ']' 00:12:25.756 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # kill -0 1613021 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # uname 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1613021 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1613021' 00:12:25.757 killing process with pid 1613021 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # kill 1613021 00:12:25.757 Received shutdown signal, test time was about 5.000000 seconds 00:12:25.757 00:12:25.757 Latency(us) 00:12:25.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.757 =================================================================================================================== 00:12:25.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # wait 1613021 00:12:25.757 18:56:40 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=1614359 00:12:25.757 18:56:40 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 1614359' 00:12:25.757 Process error testing pid: 1614359 00:12:25.757 18:56:40 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 
4096 -w randread -t 5 '' 00:12:25.757 18:56:40 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 1614359 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # '[' -z 1614359 ']' 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:25.757 18:56:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.015 [2024-06-10 18:56:40.537624] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:12:26.015 [2024-06-10 18:56:40.537685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614359 ] 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:26.015 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.015 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:26.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:26.016 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:26.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:26.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:26.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:26.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:26.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:26.016 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:26.016 [2024-06-10 18:56:40.662155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.016 [2024-06-10 18:56:40.747677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@863 -- # return 0 00:12:26.950 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.950 Dev_1 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.950 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_1 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 
00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.950 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.950 [ 00:12:26.950 { 00:12:26.950 "name": "Dev_1", 00:12:26.950 "aliases": [ 00:12:26.951 "a8e06186-2553-4edd-8217-7b28c5aa36ea" 00:12:26.951 ], 00:12:26.951 "product_name": "Malloc disk", 00:12:26.951 "block_size": 512, 00:12:26.951 "num_blocks": 262144, 00:12:26.951 "uuid": "a8e06186-2553-4edd-8217-7b28c5aa36ea", 00:12:26.951 "assigned_rate_limits": { 00:12:26.951 "rw_ios_per_sec": 0, 00:12:26.951 "rw_mbytes_per_sec": 0, 00:12:26.951 "r_mbytes_per_sec": 0, 00:12:26.951 "w_mbytes_per_sec": 0 00:12:26.951 }, 00:12:26.951 "claimed": false, 00:12:26.951 "zoned": false, 00:12:26.951 "supported_io_types": { 00:12:26.951 "read": true, 00:12:26.951 "write": true, 00:12:26.951 "unmap": true, 00:12:26.951 "write_zeroes": true, 00:12:26.951 "flush": true, 00:12:26.951 "reset": true, 00:12:26.951 "compare": false, 00:12:26.951 "compare_and_write": false, 00:12:26.951 "abort": true, 00:12:26.951 "nvme_admin": false, 00:12:26.951 "nvme_io": 
false 00:12:26.951 }, 00:12:26.951 "memory_domains": [ 00:12:26.951 { 00:12:26.951 "dma_device_id": "system", 00:12:26.951 "dma_device_type": 1 00:12:26.951 }, 00:12:26.951 { 00:12:26.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.951 "dma_device_type": 2 00:12:26.951 } 00:12:26.951 ], 00:12:26.951 "driver_specific": {} 00:12:26.951 } 00:12:26.951 ] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:12:26.951 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 true 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 Dev_2 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_2 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- 
common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 [ 00:12:26.951 { 00:12:26.951 "name": "Dev_2", 00:12:26.951 "aliases": [ 00:12:26.951 "8ddc5478-3f9d-4251-9515-66a173f2959b" 00:12:26.951 ], 00:12:26.951 "product_name": "Malloc disk", 00:12:26.951 "block_size": 512, 00:12:26.951 "num_blocks": 262144, 00:12:26.951 "uuid": "8ddc5478-3f9d-4251-9515-66a173f2959b", 00:12:26.951 "assigned_rate_limits": { 00:12:26.951 "rw_ios_per_sec": 0, 00:12:26.951 "rw_mbytes_per_sec": 0, 00:12:26.951 "r_mbytes_per_sec": 0, 00:12:26.951 "w_mbytes_per_sec": 0 00:12:26.951 }, 00:12:26.951 "claimed": false, 00:12:26.951 "zoned": false, 00:12:26.951 "supported_io_types": { 00:12:26.951 "read": true, 00:12:26.951 "write": true, 00:12:26.951 "unmap": true, 00:12:26.951 "write_zeroes": true, 00:12:26.951 "flush": true, 00:12:26.951 "reset": true, 00:12:26.951 "compare": false, 00:12:26.951 "compare_and_write": false, 00:12:26.951 "abort": true, 00:12:26.951 "nvme_admin": false, 00:12:26.951 "nvme_io": false 00:12:26.951 }, 00:12:26.951 "memory_domains": [ 00:12:26.951 { 00:12:26.951 "dma_device_id": "system", 00:12:26.951 "dma_device_type": 1 00:12:26.951 }, 00:12:26.951 { 00:12:26.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.951 
"dma_device_type": 2 00:12:26.951 } 00:12:26.951 ], 00:12:26.951 "driver_specific": {} 00:12:26.951 } 00:12:26.951 ] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:12:26.951 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.951 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 1614359 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@649 -- # local es=0 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1614359 00:12:26.951 18:56:41 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@637 -- # local arg=wait 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # type -t wait 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:26.951 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # wait 1614359 00:12:26.951 Running I/O for 5 seconds... 
00:12:26.951 task offset: 249600 on job bdev=EE_Dev_1 fails 00:12:26.951 00:12:26.951 Latency(us) 00:12:26.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.951 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:26.951 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:12:26.951 EE_Dev_1 : 0.00 32738.10 127.88 7440.48 0.00 332.70 118.78 589.82 00:12:26.951 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:26.951 Dev_2 : 0.00 19962.57 77.98 0.00 0.00 601.78 113.87 1120.67 00:12:26.951 =================================================================================================================== 00:12:26.951 Total : 52700.67 205.86 7440.48 0.00 478.64 113.87 1120.67 00:12:26.951 [2024-06-10 18:56:41.690852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:26.951 request: 00:12:26.951 { 00:12:26.951 "method": "perform_tests", 00:12:26.951 "req_id": 1 00:12:26.951 } 00:12:26.951 Got JSON-RPC error response 00:12:26.951 response: 00:12:26.951 { 00:12:26.951 "code": -32603, 00:12:26.951 "message": "bdevperf failed with error Operation not permitted" 00:12:26.951 } 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # es=255 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # es=127 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # case "$es" in 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@669 -- # es=1 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:27.210 00:12:27.210 real 0m8.838s 00:12:27.210 user 0m9.158s 00:12:27.210 sys 0m0.830s 00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:12:27.210 18:56:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:27.210 ************************************ 00:12:27.210 END TEST bdev_error 00:12:27.210 ************************************ 00:12:27.469 18:56:41 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:12:27.469 18:56:41 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:27.469 18:56:41 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:27.469 18:56:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.469 ************************************ 00:12:27.469 START TEST bdev_stat 00:12:27.469 ************************************ 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # stat_test_suite '' 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=1614644 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 1614644' 00:12:27.469 Process Bdev IO statistics testing pid: 1614644 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 1614644 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@830 -- # '[' -z 1614644 ']' 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:27.469 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:27.469 [2024-06-10 18:56:42.084864] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:12:27.469 [2024-06-10 18:56:42.084922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614644 ] 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:01.7 
cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.5 cannot be used 
00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:27.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.469 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:27.469 [2024-06-10 18:56:42.219698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.728 [2024-06-10 18:56:42.302964] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.728 [2024-06-10 18:56:42.302968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.294 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:28.294 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@863 -- # return 0 00:12:28.294 18:56:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # 
rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:12:28.294 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.294 18:56:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:28.294 Malloc_STAT 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_STAT 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local i 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:28.294 [ 00:12:28.294 { 00:12:28.294 "name": "Malloc_STAT", 00:12:28.294 "aliases": [ 00:12:28.294 "5c298c52-3931-41a5-b266-2464c10487af" 00:12:28.294 ], 00:12:28.294 "product_name": "Malloc disk", 00:12:28.294 "block_size": 512, 00:12:28.294 "num_blocks": 
262144, 00:12:28.294 "uuid": "5c298c52-3931-41a5-b266-2464c10487af", 00:12:28.294 "assigned_rate_limits": { 00:12:28.294 "rw_ios_per_sec": 0, 00:12:28.294 "rw_mbytes_per_sec": 0, 00:12:28.294 "r_mbytes_per_sec": 0, 00:12:28.294 "w_mbytes_per_sec": 0 00:12:28.294 }, 00:12:28.294 "claimed": false, 00:12:28.294 "zoned": false, 00:12:28.294 "supported_io_types": { 00:12:28.294 "read": true, 00:12:28.294 "write": true, 00:12:28.294 "unmap": true, 00:12:28.294 "write_zeroes": true, 00:12:28.294 "flush": true, 00:12:28.294 "reset": true, 00:12:28.294 "compare": false, 00:12:28.294 "compare_and_write": false, 00:12:28.294 "abort": true, 00:12:28.294 "nvme_admin": false, 00:12:28.294 "nvme_io": false 00:12:28.294 }, 00:12:28.294 "memory_domains": [ 00:12:28.294 { 00:12:28.294 "dma_device_id": "system", 00:12:28.294 "dma_device_type": 1 00:12:28.294 }, 00:12:28.294 { 00:12:28.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.294 "dma_device_type": 2 00:12:28.294 } 00:12:28.294 ], 00:12:28.294 "driver_specific": {} 00:12:28.294 } 00:12:28.294 ] 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # return 0 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:12:28.294 18:56:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.553 Running I/O for 10 seconds... 
00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:12:30.452 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:12:30.453 "tick_rate": 2500000000, 00:12:30.453 "ticks": 4619355177486156, 00:12:30.453 "bdevs": [ 00:12:30.453 { 00:12:30.453 "name": "Malloc_STAT", 00:12:30.453 "bytes_read": 812691968, 00:12:30.453 "num_read_ops": 198404, 00:12:30.453 "bytes_written": 0, 00:12:30.453 "num_write_ops": 0, 00:12:30.453 "bytes_unmapped": 0, 00:12:30.453 "num_unmap_ops": 0, 00:12:30.453 "bytes_copied": 0, 00:12:30.453 "num_copy_ops": 0, 00:12:30.453 "read_latency_ticks": 2437132124658, 00:12:30.453 "max_read_latency_ticks": 14950230, 00:12:30.453 "min_read_latency_ticks": 259094, 
00:12:30.453 "write_latency_ticks": 0, 00:12:30.453 "max_write_latency_ticks": 0, 00:12:30.453 "min_write_latency_ticks": 0, 00:12:30.453 "unmap_latency_ticks": 0, 00:12:30.453 "max_unmap_latency_ticks": 0, 00:12:30.453 "min_unmap_latency_ticks": 0, 00:12:30.453 "copy_latency_ticks": 0, 00:12:30.453 "max_copy_latency_ticks": 0, 00:12:30.453 "min_copy_latency_ticks": 0, 00:12:30.453 "io_error": {} 00:12:30.453 } 00:12:30.453 ] 00:12:30.453 }' 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=198404 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:12:30.453 "tick_rate": 2500000000, 00:12:30.453 "ticks": 4619355351421732, 00:12:30.453 "name": "Malloc_STAT", 00:12:30.453 "channels": [ 00:12:30.453 { 00:12:30.453 "thread_id": 2, 00:12:30.453 "bytes_read": 418381824, 00:12:30.453 "num_read_ops": 102144, 00:12:30.453 "bytes_written": 0, 00:12:30.453 "num_write_ops": 0, 00:12:30.453 "bytes_unmapped": 0, 00:12:30.453 "num_unmap_ops": 0, 00:12:30.453 "bytes_copied": 0, 00:12:30.453 "num_copy_ops": 0, 00:12:30.453 "read_latency_ticks": 1261706881514, 00:12:30.453 "max_read_latency_ticks": 13579734, 00:12:30.453 "min_read_latency_ticks": 7641366, 00:12:30.453 "write_latency_ticks": 0, 00:12:30.453 "max_write_latency_ticks": 0, 00:12:30.453 "min_write_latency_ticks": 0, 00:12:30.453 "unmap_latency_ticks": 0, 00:12:30.453 "max_unmap_latency_ticks": 0, 00:12:30.453 
"min_unmap_latency_ticks": 0, 00:12:30.453 "copy_latency_ticks": 0, 00:12:30.453 "max_copy_latency_ticks": 0, 00:12:30.453 "min_copy_latency_ticks": 0 00:12:30.453 }, 00:12:30.453 { 00:12:30.453 "thread_id": 3, 00:12:30.453 "bytes_read": 423624704, 00:12:30.453 "num_read_ops": 103424, 00:12:30.453 "bytes_written": 0, 00:12:30.453 "num_write_ops": 0, 00:12:30.453 "bytes_unmapped": 0, 00:12:30.453 "num_unmap_ops": 0, 00:12:30.453 "bytes_copied": 0, 00:12:30.453 "num_copy_ops": 0, 00:12:30.453 "read_latency_ticks": 1263832344646, 00:12:30.453 "max_read_latency_ticks": 14950230, 00:12:30.453 "min_read_latency_ticks": 7666614, 00:12:30.453 "write_latency_ticks": 0, 00:12:30.453 "max_write_latency_ticks": 0, 00:12:30.453 "min_write_latency_ticks": 0, 00:12:30.453 "unmap_latency_ticks": 0, 00:12:30.453 "max_unmap_latency_ticks": 0, 00:12:30.453 "min_unmap_latency_ticks": 0, 00:12:30.453 "copy_latency_ticks": 0, 00:12:30.453 "max_copy_latency_ticks": 0, 00:12:30.453 "min_copy_latency_ticks": 0 00:12:30.453 } 00:12:30.453 ] 00:12:30.453 }' 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=102144 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=102144 00:12:30.453 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=103424 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=205568 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:12:30.712 "tick_rate": 2500000000, 00:12:30.712 "ticks": 4619355649068102, 00:12:30.712 "bdevs": [ 00:12:30.712 { 00:12:30.712 "name": "Malloc_STAT", 00:12:30.712 "bytes_read": 892383744, 00:12:30.712 "num_read_ops": 217860, 00:12:30.712 "bytes_written": 0, 00:12:30.712 "num_write_ops": 0, 00:12:30.712 "bytes_unmapped": 0, 00:12:30.712 "num_unmap_ops": 0, 00:12:30.712 "bytes_copied": 0, 00:12:30.712 "num_copy_ops": 0, 00:12:30.712 "read_latency_ticks": 2677058471666, 00:12:30.712 "max_read_latency_ticks": 14950230, 00:12:30.712 "min_read_latency_ticks": 259094, 00:12:30.712 "write_latency_ticks": 0, 00:12:30.712 "max_write_latency_ticks": 0, 00:12:30.712 "min_write_latency_ticks": 0, 00:12:30.712 "unmap_latency_ticks": 0, 00:12:30.712 "max_unmap_latency_ticks": 0, 00:12:30.712 "min_unmap_latency_ticks": 0, 00:12:30.712 "copy_latency_ticks": 0, 00:12:30.712 "max_copy_latency_ticks": 0, 00:12:30.712 "min_copy_latency_ticks": 0, 00:12:30.712 "io_error": {} 00:12:30.712 } 00:12:30.712 ] 00:12:30.712 }' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=217860 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 205568 -lt 198404 ']' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 205568 -gt 217860 ']' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:30.712 00:12:30.712 
Latency(us) 00:12:30.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.712 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:30.712 Malloc_STAT : 2.17 51711.40 202.00 0.00 0.00 4938.80 1572.86 5452.60 00:12:30.712 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:30.712 Malloc_STAT : 2.17 52291.10 204.26 0.00 0.00 4884.79 1284.51 6003.10 00:12:30.712 =================================================================================================================== 00:12:30.712 Total : 104002.50 406.26 0.00 0.00 4911.64 1284.51 6003.10 00:12:30.712 0 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 1614644 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@949 -- # '[' -z 1614644 ']' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # kill -0 1614644 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # uname 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1614644 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1614644' 00:12:30.712 killing process with pid 1614644 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # kill 1614644 00:12:30.712 Received shutdown signal, test time was about 2.255680 seconds 00:12:30.712 00:12:30.712 Latency(us) 
00:12:30.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.712 =================================================================================================================== 00:12:30.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.712 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # wait 1614644 00:12:30.971 18:56:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:12:30.971 00:12:30.971 real 0m3.578s 00:12:30.971 user 0m7.153s 00:12:30.971 sys 0m0.470s 00:12:30.971 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:30.971 18:56:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:30.971 ************************************ 00:12:30.971 END TEST bdev_stat 00:12:30.971 ************************************ 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:12:30.971 18:56:45 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:12:30.971 00:12:30.971 real 1m53.898s 00:12:30.971 user 7m22.157s 00:12:30.971 sys 0m22.052s 00:12:30.971 18:56:45 
blockdev_general -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:30.971 18:56:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:30.971 ************************************ 00:12:30.971 END TEST blockdev_general 00:12:30.971 ************************************ 00:12:30.971 18:56:45 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:12:30.971 18:56:45 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:30.971 18:56:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:30.971 18:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:31.229 ************************************ 00:12:31.229 START TEST bdev_raid 00:12:31.229 ************************************ 00:12:31.229 18:56:45 bdev_raid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:12:31.229 * Looking for test storage... 00:12:31.229 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:12:31.229 18:56:45 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 
00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:12:31.229 18:56:45 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:12:31.229 18:56:45 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:31.229 18:56:45 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:31.229 18:56:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.229 ************************************ 00:12:31.229 START TEST raid_function_test_raid0 00:12:31.229 ************************************ 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # raid_function_test raid0 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=1615388 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1615388' 00:12:31.229 Process raid pid: 1615388 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 1615388 /var/tmp/spdk-raid.sock 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@830 -- # '[' -z 1615388 ']' 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local max_retries=100 
00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:31.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:31.229 18:56:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:31.229 [2024-06-10 18:56:45.968933] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:12:31.229 [2024-06-10 18:56:45.968991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:01.7 
cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.5 cannot be used 
00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:31.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.488 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:31.488 [2024-06-10 18:56:46.103266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.488 [2024-06-10 18:56:46.189837] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.746 [2024-06-10 18:56:46.254971] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.746 [2024-06-10 18:56:46.255003] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.312 18:56:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:32.312 18:56:46 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@863 -- # return 0 00:12:32.312 18:56:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:12:32.312 18:56:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:12:32.312 18:56:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:12:32.312 18:56:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:12:32.312 18:56:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:12:32.570 [2024-06-10 18:56:47.095441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:32.570 [2024-06-10 18:56:47.096727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:32.570 [2024-06-10 18:56:47.096778] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dbfa50 00:12:32.570 [2024-06-10 18:56:47.096787] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:32.570 [2024-06-10 18:56:47.096951] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c22cf0 00:12:32.570 [2024-06-10 18:56:47.097053] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dbfa50 00:12:32.570 [2024-06-10 18:56:47.097063] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x1dbfa50 00:12:32.570 [2024-06-10 18:56:47.097155] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.570 Base_1 00:12:32.570 Base_2 00:12:32.570 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:12:32.570 18:56:47 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:32.570 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.828 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:12:32.828 [2024-06-10 18:56:47.568697] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1daf800 00:12:32.828 /dev/nbd0 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local i 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # break 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:12:33.085 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.086 1+0 records in 00:12:33.086 1+0 records out 00:12:33.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262242 s, 15.6 MB/s 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # size=4096 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # return 0 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:33.086 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:33.343 { 00:12:33.343 "nbd_device": "/dev/nbd0", 00:12:33.343 "bdev_name": "raid" 00:12:33.343 } 00:12:33.343 ]' 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:33.343 { 00:12:33.343 "nbd_device": "/dev/nbd0", 00:12:33.343 "bdev_name": "raid" 00:12:33.343 } 00:12:33.343 ]' 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:12:33.343 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:33.343 4096+0 records in 00:12:33.343 4096+0 records out 00:12:33.343 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0301283 s, 69.6 MB/s 00:12:33.343 18:56:47 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:33.601 4096+0 records in 00:12:33.601 4096+0 records out 00:12:33.601 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.274125 s, 7.7 MB/s 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:33.601 128+0 records in 00:12:33.601 128+0 records out 00:12:33.601 65536 bytes (66 kB, 64 KiB) copied, 0.000824865 s, 79.5 MB/s 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 
00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:33.601 2035+0 records in 00:12:33.601 2035+0 records out 00:12:33.601 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00915732 s, 114 MB/s 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:33.601 456+0 records in 00:12:33.601 456+0 records out 00:12:33.601 233472 bytes (233 kB, 228 KiB) copied, 0.00267594 s, 87.2 MB/s 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:33.601 18:56:48 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.601 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:12:33.859 [2024-06-10 18:56:48.566590] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count 
/var/tmp/spdk-raid.sock 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:33.859 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 1615388 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@949 -- # '[' -z 1615388 ']' 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # kill -0 1615388 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # uname 00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:12:34.116 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1615388 00:12:34.374 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:34.374 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:34.375 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1615388' 00:12:34.375 killing process with pid 1615388 00:12:34.375 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # kill 1615388 00:12:34.375 [2024-06-10 18:56:48.922189] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.375 [2024-06-10 18:56:48.922241] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.375 [2024-06-10 18:56:48.922275] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.375 [2024-06-10 18:56:48.922286] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dbfa50 name raid, state offline 00:12:34.375 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # wait 1615388 00:12:34.375 [2024-06-10 18:56:48.937759] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.375 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:12:34.375 00:12:34.375 real 0m3.217s 00:12:34.375 user 0m4.188s 00:12:34.375 sys 0m1.220s 00:12:34.375 18:56:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:34.375 18:56:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:34.375 ************************************ 00:12:34.375 END TEST raid_function_test_raid0 00:12:34.375 ************************************ 00:12:34.633 18:56:49 bdev_raid -- bdev/bdev_raid.sh@860 -- # 
run_test raid_function_test_concat raid_function_test concat 00:12:34.633 18:56:49 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:34.633 18:56:49 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:34.633 18:56:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.633 ************************************ 00:12:34.633 START TEST raid_function_test_concat 00:12:34.633 ************************************ 00:12:34.633 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # raid_function_test concat 00:12:34.633 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:12:34.633 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=1616093 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1616093' 00:12:34.634 Process raid pid: 1616093 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 1616093 /var/tmp/spdk-raid.sock 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@830 -- # '[' -z 1616093 ']' 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:34.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:34.634 18:56:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:34.634 [2024-06-10 18:56:49.271912] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:12:34.634 [2024-06-10 18:56:49.271971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested 
device 0000:b6:02.0 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.6 
cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:34.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:34.634 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:34.891 [2024-06-10 18:56:49.405946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.891 [2024-06-10 18:56:49.492948] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.891 [2024-06-10 18:56:49.551692] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.891 [2024-06-10 18:56:49.551727] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@863 -- # return 0 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:12:35.456 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:12:35.714 [2024-06-10 18:56:50.409037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:35.714 [2024-06-10 18:56:50.410308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:35.714 [2024-06-10 18:56:50.410361] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xae4a50 00:12:35.714 [2024-06-10 18:56:50.410371] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:35.714 [2024-06-10 18:56:50.410538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x947cf0 00:12:35.714 [2024-06-10 18:56:50.410656] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xae4a50 00:12:35.714 [2024-06-10 18:56:50.410666] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0xae4a50 00:12:35.714 [2024-06-10 18:56:50.410760] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.714 Base_1 00:12:35.714 Base_2 00:12:35.714 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:12:35.714 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 
00:12:35.714 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.971 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:12:35.971 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:12:35.971 18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:12:35.971 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:35.971 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.972 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:12:36.229 [2024-06-10 18:56:50.870277] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xad4830 00:12:36.229 /dev/nbd0 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 
00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local i 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # break 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.229 1+0 records in 00:12:36.229 1+0 records out 00:12:36.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259683 s, 15.8 MB/s 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # size=4096 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # return 0 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.229 
18:56:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:36.229 18:56:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:36.487 { 00:12:36.487 "nbd_device": "/dev/nbd0", 00:12:36.487 "bdev_name": "raid" 00:12:36.487 } 00:12:36.487 ]' 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:36.487 { 00:12:36.487 "nbd_device": "/dev/nbd0", 00:12:36.487 "bdev_name": "raid" 00:12:36.487 } 00:12:36.487 ]' 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:12:36.487 18:56:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:12:36.487 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:36.745 4096+0 records in 00:12:36.745 4096+0 records out 00:12:36.745 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0298727 s, 70.2 MB/s 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:36.745 4096+0 records in 00:12:36.745 4096+0 records out 00:12:36.745 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.187743 s, 11.2 MB/s 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:36.745 128+0 records in 00:12:36.745 128+0 records out 00:12:36.745 65536 bytes (66 kB, 64 KiB) copied, 0.000828474 s, 79.1 MB/s 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:12:36.745 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:12:36.745 18:56:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:37.004 2035+0 records in 00:12:37.004 2035+0 records out 00:12:37.004 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0105102 s, 99.1 MB/s 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:37.004 456+0 records in 00:12:37.004 456+0 records out 00:12:37.004 233472 bytes (233 kB, 228 KiB) copied, 0.00271723 s, 85.9 MB/s 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:12:37.004 18:56:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.004 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:12:37.262 [2024-06-10 18:56:51.798071] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 
-- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:37.262 18:56:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 1616093 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@949 -- # '[' -z 1616093 ']' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # kill -0 1616093 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # uname 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1616093 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1616093' 00:12:37.520 killing process with pid 1616093 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # kill 1616093 00:12:37.520 [2024-06-10 18:56:52.157504] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.520 [2024-06-10 18:56:52.157564] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.520 [2024-06-10 18:56:52.157608] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.520 [2024-06-10 18:56:52.157620] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xae4a50 name raid, state offline 00:12:37.520 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # wait 1616093 00:12:37.520 [2024-06-10 18:56:52.173379] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.779 18:56:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:12:37.779 00:12:37.779 real 0m3.149s 00:12:37.779 user 0m4.234s 00:12:37.779 sys 0m1.129s 00:12:37.779 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:37.779 18:56:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:37.779 ************************************ 00:12:37.779 END TEST raid_function_test_concat 00:12:37.779 ************************************ 
00:12:37.779 18:56:52 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:12:37.779 18:56:52 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:37.779 18:56:52 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:37.779 18:56:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.779 ************************************ 00:12:37.779 START TEST raid0_resize_test 00:12:37.779 ************************************ 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # raid0_resize_test 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=1616763 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 1616763' 00:12:37.779 Process raid pid: 1616763 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 1616763 /var/tmp/spdk-raid.sock 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@830 -- # '[' -z 1616763 ']' 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:37.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:37.779 18:56:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.779 [2024-06-10 18:56:52.507717] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:12:37.779 [2024-06-10 18:56:52.507774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.6 cannot be used 
00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:38.038 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:38.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.038 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:38.038 [2024-06-10 18:56:52.641855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.038 [2024-06-10 18:56:52.728504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.038 [2024-06-10 18:56:52.785688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.038 [2024-06-10 18:56:52.785722] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.974 18:56:53 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:38.974 18:56:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@863 -- # return 0 00:12:38.974 18:56:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:12:38.974 Base_1 00:12:38.974 18:56:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:12:39.233 Base_2 00:12:39.233 18:56:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:12:39.492 [2024-06-10 18:56:54.049565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:39.492 [2024-06-10 18:56:54.050891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:39.492 [2024-06-10 18:56:54.050935] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23f0e80 00:12:39.492 [2024-06-10 18:56:54.050945] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:39.492 [2024-06-10 18:56:54.051124] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23aa4e0 00:12:39.492 [2024-06-10 18:56:54.051205] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23f0e80 00:12:39.492 [2024-06-10 18:56:54.051214] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x23f0e80 00:12:39.492 [2024-06-10 18:56:54.051306] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.492 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:12:39.752 [2024-06-10 18:56:54.274153] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:39.752 [2024-06-10 18:56:54.274176] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:39.752 true 00:12:39.752 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:39.752 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:12:39.752 [2024-06-10 18:56:54.486825] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.752 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:12:39.752 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:12:39.752 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:12:39.752 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:12:40.010 [2024-06-10 18:56:54.711288] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:40.011 [2024-06-10 18:56:54.711305] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:40.011 [2024-06-10 18:56:54.711328] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:12:40.011 true 00:12:40.011 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:40.011 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq 
'.[].num_blocks' 00:12:40.269 [2024-06-10 18:56:54.935999] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 1616763 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@949 -- # '[' -z 1616763 ']' 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # kill -0 1616763 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # uname 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:40.269 18:56:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1616763 00:12:40.269 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:40.269 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:40.269 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1616763' 00:12:40.269 killing process with pid 1616763 00:12:40.269 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # kill 1616763 00:12:40.269 [2024-06-10 18:56:55.015362] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.269 [2024-06-10 18:56:55.015412] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.269 [2024-06-10 18:56:55.015450] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.269 [2024-06-10 18:56:55.015460] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f0e80 name Raid, state offline 00:12:40.269 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # wait 1616763 00:12:40.269 [2024-06-10 18:56:55.016669] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.528 18:56:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:12:40.528 00:12:40.528 real 0m2.746s 00:12:40.528 user 0m4.194s 00:12:40.528 sys 0m0.609s 00:12:40.528 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:40.528 18:56:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.528 ************************************ 00:12:40.528 END TEST raid0_resize_test 00:12:40.528 ************************************ 00:12:40.528 18:56:55 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:40.528 18:56:55 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:40.528 18:56:55 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:40.528 18:56:55 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:40.528 18:56:55 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:40.528 18:56:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.528 ************************************ 00:12:40.528 START TEST raid_state_function_test 00:12:40.528 ************************************ 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 2 false 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:40.528 18:56:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:40.528 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1617184 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1617184' 00:12:40.787 Process raid pid: 1617184 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1617184 /var/tmp/spdk-raid.sock 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1617184 ']' 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:40.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:40.787 18:56:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.787 [2024-06-10 18:56:55.348060] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:12:40.787 [2024-06-10 18:56:55.348117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.787 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:40.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.787 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:12:40.788 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:40.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:40.788 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:40.788 [2024-06-10 18:56:55.483053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.046 [2024-06-10 18:56:55.570434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.046 [2024-06-10 18:56:55.635433] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.046 [2024-06-10 18:56:55.635467] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.612 18:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:41.612 18:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:12:41.612 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:41.870 [2024-06-10 18:56:56.451048] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.870 [2024-06-10 18:56:56.451088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.870 [2024-06-10 18:56:56.451098] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.870 [2024-06-10 18:56:56.451109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.870 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.128 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.128 "name": "Existed_Raid", 00:12:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.128 "strip_size_kb": 64, 00:12:42.128 "state": "configuring", 00:12:42.128 
"raid_level": "raid0", 00:12:42.128 "superblock": false, 00:12:42.128 "num_base_bdevs": 2, 00:12:42.128 "num_base_bdevs_discovered": 0, 00:12:42.128 "num_base_bdevs_operational": 2, 00:12:42.128 "base_bdevs_list": [ 00:12:42.128 { 00:12:42.128 "name": "BaseBdev1", 00:12:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.128 "is_configured": false, 00:12:42.128 "data_offset": 0, 00:12:42.128 "data_size": 0 00:12:42.128 }, 00:12:42.128 { 00:12:42.128 "name": "BaseBdev2", 00:12:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.128 "is_configured": false, 00:12:42.128 "data_offset": 0, 00:12:42.128 "data_size": 0 00:12:42.128 } 00:12:42.128 ] 00:12:42.128 }' 00:12:42.128 18:56:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.128 18:56:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.694 18:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:42.694 [2024-06-10 18:56:57.441534] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.694 [2024-06-10 18:56:57.441563] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf61f10 name Existed_Raid, state configuring 00:12:42.953 18:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:42.953 [2024-06-10 18:56:57.650091] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.953 [2024-06-10 18:56:57.650115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.953 [2024-06-10 18:56:57.650124] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev2 00:12:42.953 [2024-06-10 18:56:57.650135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.953 18:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.211 [2024-06-10 18:56:57.884035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.211 BaseBdev1 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:43.211 18:56:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.469 18:56:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.728 [ 00:12:43.728 { 00:12:43.728 "name": "BaseBdev1", 00:12:43.728 "aliases": [ 00:12:43.728 "ca9951c6-1231-4cce-99db-4b6fb5b52de4" 00:12:43.728 ], 00:12:43.728 "product_name": "Malloc disk", 00:12:43.728 "block_size": 512, 00:12:43.728 "num_blocks": 65536, 00:12:43.728 "uuid": "ca9951c6-1231-4cce-99db-4b6fb5b52de4", 00:12:43.728 "assigned_rate_limits": { 00:12:43.728 
"rw_ios_per_sec": 0, 00:12:43.728 "rw_mbytes_per_sec": 0, 00:12:43.728 "r_mbytes_per_sec": 0, 00:12:43.728 "w_mbytes_per_sec": 0 00:12:43.728 }, 00:12:43.728 "claimed": true, 00:12:43.728 "claim_type": "exclusive_write", 00:12:43.728 "zoned": false, 00:12:43.728 "supported_io_types": { 00:12:43.728 "read": true, 00:12:43.728 "write": true, 00:12:43.728 "unmap": true, 00:12:43.728 "write_zeroes": true, 00:12:43.728 "flush": true, 00:12:43.728 "reset": true, 00:12:43.728 "compare": false, 00:12:43.728 "compare_and_write": false, 00:12:43.728 "abort": true, 00:12:43.728 "nvme_admin": false, 00:12:43.728 "nvme_io": false 00:12:43.728 }, 00:12:43.728 "memory_domains": [ 00:12:43.728 { 00:12:43.728 "dma_device_id": "system", 00:12:43.728 "dma_device_type": 1 00:12:43.728 }, 00:12:43.728 { 00:12:43.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.728 "dma_device_type": 2 00:12:43.728 } 00:12:43.728 ], 00:12:43.728 "driver_specific": {} 00:12:43.728 } 00:12:43.728 ] 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.728 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.990 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.990 "name": "Existed_Raid", 00:12:43.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.990 "strip_size_kb": 64, 00:12:43.990 "state": "configuring", 00:12:43.990 "raid_level": "raid0", 00:12:43.990 "superblock": false, 00:12:43.990 "num_base_bdevs": 2, 00:12:43.990 "num_base_bdevs_discovered": 1, 00:12:43.990 "num_base_bdevs_operational": 2, 00:12:43.990 "base_bdevs_list": [ 00:12:43.990 { 00:12:43.990 "name": "BaseBdev1", 00:12:43.990 "uuid": "ca9951c6-1231-4cce-99db-4b6fb5b52de4", 00:12:43.990 "is_configured": true, 00:12:43.990 "data_offset": 0, 00:12:43.990 "data_size": 65536 00:12:43.990 }, 00:12:43.990 { 00:12:43.990 "name": "BaseBdev2", 00:12:43.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.990 "is_configured": false, 00:12:43.990 "data_offset": 0, 00:12:43.990 "data_size": 0 00:12:43.990 } 00:12:43.990 ] 00:12:43.990 }' 00:12:43.990 18:56:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.990 18:56:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.626 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:44.626 [2024-06-10 
18:56:59.347900] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.626 [2024-06-10 18:56:59.347936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf61800 name Existed_Raid, state configuring 00:12:44.626 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:44.884 [2024-06-10 18:56:59.576518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.884 [2024-06-10 18:56:59.577861] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.884 [2024-06-10 18:56:59.577892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.884 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.142 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.142 "name": "Existed_Raid", 00:12:45.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.142 "strip_size_kb": 64, 00:12:45.142 "state": "configuring", 00:12:45.142 "raid_level": "raid0", 00:12:45.142 "superblock": false, 00:12:45.142 "num_base_bdevs": 2, 00:12:45.142 "num_base_bdevs_discovered": 1, 00:12:45.142 "num_base_bdevs_operational": 2, 00:12:45.142 "base_bdevs_list": [ 00:12:45.142 { 00:12:45.142 "name": "BaseBdev1", 00:12:45.142 "uuid": "ca9951c6-1231-4cce-99db-4b6fb5b52de4", 00:12:45.142 "is_configured": true, 00:12:45.142 "data_offset": 0, 00:12:45.142 "data_size": 65536 00:12:45.142 }, 00:12:45.142 { 00:12:45.142 "name": "BaseBdev2", 00:12:45.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.142 "is_configured": false, 00:12:45.142 "data_offset": 0, 00:12:45.142 "data_size": 0 00:12:45.142 } 00:12:45.142 ] 00:12:45.142 }' 00:12:45.142 18:56:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.142 18:56:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.707 18:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 
00:12:45.965 [2024-06-10 18:57:00.646430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.965 [2024-06-10 18:57:00.646460] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf625f0 00:12:45.965 [2024-06-10 18:57:00.646467] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:45.965 [2024-06-10 18:57:00.646701] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11143b0 00:12:45.965 [2024-06-10 18:57:00.646806] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf625f0 00:12:45.965 [2024-06-10 18:57:00.646815] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf625f0 00:12:45.965 [2024-06-10 18:57:00.646961] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.965 BaseBdev2 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:45.965 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.223 18:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:12:46.482 [ 00:12:46.482 { 00:12:46.482 "name": "BaseBdev2", 00:12:46.482 "aliases": [ 00:12:46.482 "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15" 00:12:46.482 ], 00:12:46.482 "product_name": "Malloc disk", 00:12:46.482 "block_size": 512, 00:12:46.482 "num_blocks": 65536, 00:12:46.482 "uuid": "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15", 00:12:46.482 "assigned_rate_limits": { 00:12:46.482 "rw_ios_per_sec": 0, 00:12:46.482 "rw_mbytes_per_sec": 0, 00:12:46.482 "r_mbytes_per_sec": 0, 00:12:46.482 "w_mbytes_per_sec": 0 00:12:46.482 }, 00:12:46.482 "claimed": true, 00:12:46.482 "claim_type": "exclusive_write", 00:12:46.482 "zoned": false, 00:12:46.482 "supported_io_types": { 00:12:46.482 "read": true, 00:12:46.482 "write": true, 00:12:46.482 "unmap": true, 00:12:46.482 "write_zeroes": true, 00:12:46.482 "flush": true, 00:12:46.482 "reset": true, 00:12:46.482 "compare": false, 00:12:46.482 "compare_and_write": false, 00:12:46.482 "abort": true, 00:12:46.482 "nvme_admin": false, 00:12:46.482 "nvme_io": false 00:12:46.482 }, 00:12:46.482 "memory_domains": [ 00:12:46.482 { 00:12:46.482 "dma_device_id": "system", 00:12:46.482 "dma_device_type": 1 00:12:46.482 }, 00:12:46.482 { 00:12:46.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.482 "dma_device_type": 2 00:12:46.482 } 00:12:46.482 ], 00:12:46.482 "driver_specific": {} 00:12:46.482 } 00:12:46.482 ] 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.482 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.740 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.740 "name": "Existed_Raid", 00:12:46.740 "uuid": "7d05da76-3861-417d-a633-d2ac4061fef4", 00:12:46.740 "strip_size_kb": 64, 00:12:46.740 "state": "online", 00:12:46.740 "raid_level": "raid0", 00:12:46.740 "superblock": false, 00:12:46.740 "num_base_bdevs": 2, 00:12:46.740 "num_base_bdevs_discovered": 2, 00:12:46.740 "num_base_bdevs_operational": 2, 00:12:46.740 "base_bdevs_list": [ 00:12:46.740 { 00:12:46.740 "name": "BaseBdev1", 00:12:46.740 "uuid": "ca9951c6-1231-4cce-99db-4b6fb5b52de4", 00:12:46.740 "is_configured": true, 00:12:46.740 "data_offset": 0, 00:12:46.740 "data_size": 65536 00:12:46.740 }, 00:12:46.740 { 00:12:46.740 "name": "BaseBdev2", 00:12:46.740 "uuid": "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15", 
00:12:46.740 "is_configured": true, 00:12:46.740 "data_offset": 0, 00:12:46.740 "data_size": 65536 00:12:46.740 } 00:12:46.740 ] 00:12:46.740 }' 00:12:46.740 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.740 18:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:47.305 18:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:47.575 [2024-06-10 18:57:02.122495] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.575 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:47.575 "name": "Existed_Raid", 00:12:47.575 "aliases": [ 00:12:47.575 "7d05da76-3861-417d-a633-d2ac4061fef4" 00:12:47.575 ], 00:12:47.575 "product_name": "Raid Volume", 00:12:47.575 "block_size": 512, 00:12:47.575 "num_blocks": 131072, 00:12:47.575 "uuid": "7d05da76-3861-417d-a633-d2ac4061fef4", 00:12:47.575 "assigned_rate_limits": { 00:12:47.576 "rw_ios_per_sec": 0, 00:12:47.576 "rw_mbytes_per_sec": 0, 00:12:47.576 "r_mbytes_per_sec": 0, 
00:12:47.576 "w_mbytes_per_sec": 0 00:12:47.576 }, 00:12:47.576 "claimed": false, 00:12:47.576 "zoned": false, 00:12:47.576 "supported_io_types": { 00:12:47.576 "read": true, 00:12:47.576 "write": true, 00:12:47.576 "unmap": true, 00:12:47.576 "write_zeroes": true, 00:12:47.576 "flush": true, 00:12:47.576 "reset": true, 00:12:47.576 "compare": false, 00:12:47.576 "compare_and_write": false, 00:12:47.576 "abort": false, 00:12:47.576 "nvme_admin": false, 00:12:47.576 "nvme_io": false 00:12:47.576 }, 00:12:47.576 "memory_domains": [ 00:12:47.576 { 00:12:47.576 "dma_device_id": "system", 00:12:47.576 "dma_device_type": 1 00:12:47.576 }, 00:12:47.576 { 00:12:47.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.576 "dma_device_type": 2 00:12:47.576 }, 00:12:47.576 { 00:12:47.576 "dma_device_id": "system", 00:12:47.576 "dma_device_type": 1 00:12:47.576 }, 00:12:47.576 { 00:12:47.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.576 "dma_device_type": 2 00:12:47.576 } 00:12:47.576 ], 00:12:47.576 "driver_specific": { 00:12:47.576 "raid": { 00:12:47.576 "uuid": "7d05da76-3861-417d-a633-d2ac4061fef4", 00:12:47.576 "strip_size_kb": 64, 00:12:47.577 "state": "online", 00:12:47.577 "raid_level": "raid0", 00:12:47.577 "superblock": false, 00:12:47.577 "num_base_bdevs": 2, 00:12:47.577 "num_base_bdevs_discovered": 2, 00:12:47.577 "num_base_bdevs_operational": 2, 00:12:47.577 "base_bdevs_list": [ 00:12:47.577 { 00:12:47.577 "name": "BaseBdev1", 00:12:47.577 "uuid": "ca9951c6-1231-4cce-99db-4b6fb5b52de4", 00:12:47.577 "is_configured": true, 00:12:47.577 "data_offset": 0, 00:12:47.577 "data_size": 65536 00:12:47.577 }, 00:12:47.577 { 00:12:47.577 "name": "BaseBdev2", 00:12:47.577 "uuid": "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15", 00:12:47.577 "is_configured": true, 00:12:47.577 "data_offset": 0, 00:12:47.577 "data_size": 65536 00:12:47.577 } 00:12:47.577 ] 00:12:47.577 } 00:12:47.577 } 00:12:47.577 }' 00:12:47.577 18:57:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.577 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:47.577 BaseBdev2' 00:12:47.577 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:47.577 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:47.577 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:47.835 "name": "BaseBdev1", 00:12:47.835 "aliases": [ 00:12:47.835 "ca9951c6-1231-4cce-99db-4b6fb5b52de4" 00:12:47.835 ], 00:12:47.835 "product_name": "Malloc disk", 00:12:47.835 "block_size": 512, 00:12:47.835 "num_blocks": 65536, 00:12:47.835 "uuid": "ca9951c6-1231-4cce-99db-4b6fb5b52de4", 00:12:47.835 "assigned_rate_limits": { 00:12:47.835 "rw_ios_per_sec": 0, 00:12:47.835 "rw_mbytes_per_sec": 0, 00:12:47.835 "r_mbytes_per_sec": 0, 00:12:47.835 "w_mbytes_per_sec": 0 00:12:47.835 }, 00:12:47.835 "claimed": true, 00:12:47.835 "claim_type": "exclusive_write", 00:12:47.835 "zoned": false, 00:12:47.835 "supported_io_types": { 00:12:47.835 "read": true, 00:12:47.835 "write": true, 00:12:47.835 "unmap": true, 00:12:47.835 "write_zeroes": true, 00:12:47.835 "flush": true, 00:12:47.835 "reset": true, 00:12:47.835 "compare": false, 00:12:47.835 "compare_and_write": false, 00:12:47.835 "abort": true, 00:12:47.835 "nvme_admin": false, 00:12:47.835 "nvme_io": false 00:12:47.835 }, 00:12:47.835 "memory_domains": [ 00:12:47.835 { 00:12:47.835 "dma_device_id": "system", 00:12:47.835 "dma_device_type": 1 00:12:47.835 }, 00:12:47.835 { 00:12:47.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.835 
"dma_device_type": 2 00:12:47.835 } 00:12:47.835 ], 00:12:47.835 "driver_specific": {} 00:12:47.835 }' 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:47.835 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:48.093 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.351 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.351 "name": "BaseBdev2", 00:12:48.351 "aliases": [ 00:12:48.351 "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15" 00:12:48.351 ], 00:12:48.351 
"product_name": "Malloc disk", 00:12:48.351 "block_size": 512, 00:12:48.351 "num_blocks": 65536, 00:12:48.351 "uuid": "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15", 00:12:48.351 "assigned_rate_limits": { 00:12:48.351 "rw_ios_per_sec": 0, 00:12:48.351 "rw_mbytes_per_sec": 0, 00:12:48.351 "r_mbytes_per_sec": 0, 00:12:48.351 "w_mbytes_per_sec": 0 00:12:48.351 }, 00:12:48.351 "claimed": true, 00:12:48.351 "claim_type": "exclusive_write", 00:12:48.351 "zoned": false, 00:12:48.351 "supported_io_types": { 00:12:48.351 "read": true, 00:12:48.351 "write": true, 00:12:48.351 "unmap": true, 00:12:48.351 "write_zeroes": true, 00:12:48.351 "flush": true, 00:12:48.351 "reset": true, 00:12:48.351 "compare": false, 00:12:48.352 "compare_and_write": false, 00:12:48.352 "abort": true, 00:12:48.352 "nvme_admin": false, 00:12:48.352 "nvme_io": false 00:12:48.352 }, 00:12:48.352 "memory_domains": [ 00:12:48.352 { 00:12:48.352 "dma_device_id": "system", 00:12:48.352 "dma_device_type": 1 00:12:48.352 }, 00:12:48.352 { 00:12:48.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.352 "dma_device_type": 2 00:12:48.352 } 00:12:48.352 ], 00:12:48.352 "driver_specific": {} 00:12:48.352 }' 00:12:48.352 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.352 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.352 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.352 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.352 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.609 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:48.867 [2024-06-10 18:57:03.509972] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.867 [2024-06-10 18:57:03.509994] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.867 [2024-06-10 18:57:03.510030] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.867 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:48.867 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:48.868 
18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.868 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.125 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.125 "name": "Existed_Raid", 00:12:49.125 "uuid": "7d05da76-3861-417d-a633-d2ac4061fef4", 00:12:49.125 "strip_size_kb": 64, 00:12:49.125 "state": "offline", 00:12:49.125 "raid_level": "raid0", 00:12:49.125 "superblock": false, 00:12:49.125 "num_base_bdevs": 2, 00:12:49.125 "num_base_bdevs_discovered": 1, 00:12:49.125 "num_base_bdevs_operational": 1, 00:12:49.125 "base_bdevs_list": [ 00:12:49.126 { 00:12:49.126 "name": null, 00:12:49.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.126 "is_configured": false, 00:12:49.126 "data_offset": 0, 00:12:49.126 "data_size": 65536 00:12:49.126 }, 00:12:49.126 { 00:12:49.126 "name": "BaseBdev2", 00:12:49.126 "uuid": "9c47ad15-ee86-49be-a8ef-6a0ac2dd5a15", 00:12:49.126 "is_configured": true, 00:12:49.126 "data_offset": 0, 00:12:49.126 "data_size": 65536 00:12:49.126 } 00:12:49.126 ] 00:12:49.126 }' 00:12:49.126 18:57:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.126 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.690 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:49.690 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:49.690 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.690 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:49.947 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:49.947 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.948 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:50.206 [2024-06-10 18:57:04.758296] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.206 [2024-06-10 18:57:04.758342] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf625f0 name Existed_Raid, state offline 00:12:50.206 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:50.206 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:50.206 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:50.206 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.465 18:57:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1617184 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1617184 ']' 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1617184 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1617184 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1617184' 00:12:50.466 killing process with pid 1617184 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1617184 00:12:50.466 [2024-06-10 18:57:05.076482] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.466 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1617184 00:12:50.466 [2024-06-10 18:57:05.077350] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.724 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:50.724 00:12:50.724 real 0m9.988s 00:12:50.724 user 0m17.726s 00:12:50.725 sys 0m1.873s 00:12:50.725 18:57:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 ************************************ 00:12:50.725 END TEST raid_state_function_test 00:12:50.725 ************************************ 00:12:50.725 18:57:05 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:50.725 18:57:05 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:50.725 18:57:05 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:50.725 18:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 ************************************ 00:12:50.725 START TEST raid_state_function_test_sb 00:12:50.725 ************************************ 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 2 true 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1619666 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1619666' 00:12:50.725 Process raid pid: 1619666 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1619666 /var/tmp/spdk-raid.sock 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1619666 ']' 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:50.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:50.725 18:57:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 [2024-06-10 18:57:05.413388] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:12:50.725 [2024-06-10 18:57:05.413443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.0 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.1 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.2 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.3 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.4 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.5 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.6 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:01.7 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.0 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.1 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.2 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.3 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.4 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.5 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.6 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b6:02.7 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.0 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.1 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.2 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.3 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.4 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.5 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.6 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:01.7 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.0 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.1 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:12:50.984 EAL: Requested device 0000:b8:02.2 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.3 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.4 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.5 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.6 cannot be used 00:12:50.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.984 EAL: Requested device 0000:b8:02.7 cannot be used 00:12:50.984 [2024-06-10 18:57:05.540503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.984 [2024-06-10 18:57:05.626038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.984 [2024-06-10 18:57:05.686592] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.984 [2024-06-10 18:57:05.686620] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:51.919 [2024-06-10 18:57:06.524548] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.919 [2024-06-10 18:57:06.524597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.919 [2024-06-10 18:57:06.524607] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.919 [2024-06-10 18:57:06.524618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.919 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.177 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.177 "name": "Existed_Raid", 00:12:52.177 "uuid": "9f65fc4e-bc7c-4884-9655-94af2a9aaf50", 00:12:52.177 "strip_size_kb": 64, 
00:12:52.177 "state": "configuring", 00:12:52.177 "raid_level": "raid0", 00:12:52.177 "superblock": true, 00:12:52.177 "num_base_bdevs": 2, 00:12:52.177 "num_base_bdevs_discovered": 0, 00:12:52.177 "num_base_bdevs_operational": 2, 00:12:52.177 "base_bdevs_list": [ 00:12:52.177 { 00:12:52.177 "name": "BaseBdev1", 00:12:52.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.177 "is_configured": false, 00:12:52.177 "data_offset": 0, 00:12:52.177 "data_size": 0 00:12:52.177 }, 00:12:52.177 { 00:12:52.177 "name": "BaseBdev2", 00:12:52.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.177 "is_configured": false, 00:12:52.177 "data_offset": 0, 00:12:52.177 "data_size": 0 00:12:52.177 } 00:12:52.177 ] 00:12:52.177 }' 00:12:52.177 18:57:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.177 18:57:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.742 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:52.999 [2024-06-10 18:57:07.547117] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.999 [2024-06-10 18:57:07.547147] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dadf10 name Existed_Raid, state configuring 00:12:52.999 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:53.257 [2024-06-10 18:57:07.759691] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.257 [2024-06-10 18:57:07.759717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.257 [2024-06-10 18:57:07.759726] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.257 [2024-06-10 18:57:07.759737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.257 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.257 [2024-06-10 18:57:07.997862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.257 BaseBdev1 00:12:53.257 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:53.257 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:12:53.515 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:53.515 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:12:53.515 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:53.515 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:53.515 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:53.515 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.773 [ 00:12:53.773 { 00:12:53.773 "name": "BaseBdev1", 00:12:53.773 "aliases": [ 00:12:53.773 "aac3df4c-4842-457b-8191-6f23fe74617c" 00:12:53.773 ], 00:12:53.773 "product_name": "Malloc disk", 00:12:53.773 "block_size": 512, 00:12:53.773 "num_blocks": 65536, 00:12:53.773 "uuid": 
"aac3df4c-4842-457b-8191-6f23fe74617c", 00:12:53.773 "assigned_rate_limits": { 00:12:53.773 "rw_ios_per_sec": 0, 00:12:53.773 "rw_mbytes_per_sec": 0, 00:12:53.773 "r_mbytes_per_sec": 0, 00:12:53.773 "w_mbytes_per_sec": 0 00:12:53.773 }, 00:12:53.773 "claimed": true, 00:12:53.773 "claim_type": "exclusive_write", 00:12:53.773 "zoned": false, 00:12:53.773 "supported_io_types": { 00:12:53.773 "read": true, 00:12:53.773 "write": true, 00:12:53.773 "unmap": true, 00:12:53.773 "write_zeroes": true, 00:12:53.773 "flush": true, 00:12:53.773 "reset": true, 00:12:53.773 "compare": false, 00:12:53.773 "compare_and_write": false, 00:12:53.773 "abort": true, 00:12:53.773 "nvme_admin": false, 00:12:53.773 "nvme_io": false 00:12:53.773 }, 00:12:53.773 "memory_domains": [ 00:12:53.773 { 00:12:53.773 "dma_device_id": "system", 00:12:53.773 "dma_device_type": 1 00:12:53.773 }, 00:12:53.773 { 00:12:53.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.773 "dma_device_type": 2 00:12:53.773 } 00:12:53.773 ], 00:12:53.773 "driver_specific": {} 00:12:53.773 } 00:12:53.773 ] 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.773 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.031 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.031 "name": "Existed_Raid", 00:12:54.031 "uuid": "2585de2d-2461-42e6-81f5-2e32717df37d", 00:12:54.031 "strip_size_kb": 64, 00:12:54.031 "state": "configuring", 00:12:54.031 "raid_level": "raid0", 00:12:54.031 "superblock": true, 00:12:54.031 "num_base_bdevs": 2, 00:12:54.031 "num_base_bdevs_discovered": 1, 00:12:54.031 "num_base_bdevs_operational": 2, 00:12:54.031 "base_bdevs_list": [ 00:12:54.031 { 00:12:54.031 "name": "BaseBdev1", 00:12:54.031 "uuid": "aac3df4c-4842-457b-8191-6f23fe74617c", 00:12:54.031 "is_configured": true, 00:12:54.031 "data_offset": 2048, 00:12:54.031 "data_size": 63488 00:12:54.031 }, 00:12:54.031 { 00:12:54.031 "name": "BaseBdev2", 00:12:54.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.031 "is_configured": false, 00:12:54.031 "data_offset": 0, 00:12:54.031 "data_size": 0 00:12:54.031 } 00:12:54.031 ] 00:12:54.031 }' 00:12:54.031 18:57:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.031 18:57:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:54.855 [2024-06-10 18:57:09.465740] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.855 [2024-06-10 18:57:09.465781] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dad800 name Existed_Raid, state configuring 00:12:54.855 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:55.115 [2024-06-10 18:57:09.694374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.115 [2024-06-10 18:57:09.695738] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.115 [2024-06-10 18:57:09.695771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.115 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.374 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.374 "name": "Existed_Raid", 00:12:55.374 "uuid": "85d4a0c9-a571-4b13-be38-df7215d987d2", 00:12:55.374 "strip_size_kb": 64, 00:12:55.374 "state": "configuring", 00:12:55.374 "raid_level": "raid0", 00:12:55.374 "superblock": true, 00:12:55.374 "num_base_bdevs": 2, 00:12:55.374 "num_base_bdevs_discovered": 1, 00:12:55.374 "num_base_bdevs_operational": 2, 00:12:55.374 "base_bdevs_list": [ 00:12:55.374 { 00:12:55.374 "name": "BaseBdev1", 00:12:55.374 "uuid": "aac3df4c-4842-457b-8191-6f23fe74617c", 00:12:55.374 "is_configured": true, 00:12:55.374 "data_offset": 2048, 00:12:55.374 "data_size": 63488 00:12:55.374 }, 00:12:55.374 { 00:12:55.374 "name": "BaseBdev2", 00:12:55.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.374 "is_configured": false, 00:12:55.374 "data_offset": 0, 00:12:55.374 "data_size": 0 00:12:55.374 } 00:12:55.374 ] 00:12:55.374 }' 00:12:55.374 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:55.374 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.940 18:57:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.198 [2024-06-10 18:57:10.704187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.198 [2024-06-10 18:57:10.704331] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dae5f0 00:12:56.199 [2024-06-10 18:57:10.704344] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:56.199 [2024-06-10 18:57:10.704510] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f603b0 00:12:56.199 [2024-06-10 18:57:10.704634] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dae5f0 00:12:56.199 [2024-06-10 18:57:10.704644] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1dae5f0 00:12:56.199 [2024-06-10 18:57:10.704734] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.199 BaseBdev2 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
00:12:56.199 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.456 [ 00:12:56.456 { 00:12:56.456 "name": "BaseBdev2", 00:12:56.456 "aliases": [ 00:12:56.456 "2dacfc76-d711-46c4-9712-87605768d810" 00:12:56.456 ], 00:12:56.456 "product_name": "Malloc disk", 00:12:56.456 "block_size": 512, 00:12:56.456 "num_blocks": 65536, 00:12:56.456 "uuid": "2dacfc76-d711-46c4-9712-87605768d810", 00:12:56.456 "assigned_rate_limits": { 00:12:56.456 "rw_ios_per_sec": 0, 00:12:56.456 "rw_mbytes_per_sec": 0, 00:12:56.456 "r_mbytes_per_sec": 0, 00:12:56.456 "w_mbytes_per_sec": 0 00:12:56.456 }, 00:12:56.456 "claimed": true, 00:12:56.456 "claim_type": "exclusive_write", 00:12:56.456 "zoned": false, 00:12:56.456 "supported_io_types": { 00:12:56.456 "read": true, 00:12:56.456 "write": true, 00:12:56.456 "unmap": true, 00:12:56.456 "write_zeroes": true, 00:12:56.456 "flush": true, 00:12:56.456 "reset": true, 00:12:56.456 "compare": false, 00:12:56.456 "compare_and_write": false, 00:12:56.456 "abort": true, 00:12:56.456 "nvme_admin": false, 00:12:56.456 "nvme_io": false 00:12:56.456 }, 00:12:56.456 "memory_domains": [ 00:12:56.456 { 00:12:56.456 "dma_device_id": "system", 00:12:56.456 "dma_device_type": 1 00:12:56.456 }, 00:12:56.456 { 00:12:56.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.456 "dma_device_type": 2 00:12:56.456 } 00:12:56.456 ], 00:12:56.456 "driver_specific": {} 00:12:56.456 } 00:12:56.456 ] 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.456 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.713 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.713 "name": "Existed_Raid", 00:12:56.713 "uuid": "85d4a0c9-a571-4b13-be38-df7215d987d2", 00:12:56.713 "strip_size_kb": 64, 00:12:56.713 "state": "online", 00:12:56.713 "raid_level": "raid0", 00:12:56.713 "superblock": true, 00:12:56.713 "num_base_bdevs": 2, 00:12:56.714 "num_base_bdevs_discovered": 2, 00:12:56.714 "num_base_bdevs_operational": 2, 00:12:56.714 "base_bdevs_list": [ 00:12:56.714 { 00:12:56.714 "name": "BaseBdev1", 00:12:56.714 "uuid": 
"aac3df4c-4842-457b-8191-6f23fe74617c", 00:12:56.714 "is_configured": true, 00:12:56.714 "data_offset": 2048, 00:12:56.714 "data_size": 63488 00:12:56.714 }, 00:12:56.714 { 00:12:56.714 "name": "BaseBdev2", 00:12:56.714 "uuid": "2dacfc76-d711-46c4-9712-87605768d810", 00:12:56.714 "is_configured": true, 00:12:56.714 "data_offset": 2048, 00:12:56.714 "data_size": 63488 00:12:56.714 } 00:12:56.714 ] 00:12:56.714 }' 00:12:56.714 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.714 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:57.276 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:57.533 [2024-06-10 18:57:12.192483] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.533 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:57.533 "name": "Existed_Raid", 00:12:57.533 "aliases": [ 00:12:57.533 "85d4a0c9-a571-4b13-be38-df7215d987d2" 00:12:57.533 ], 00:12:57.533 
"product_name": "Raid Volume", 00:12:57.533 "block_size": 512, 00:12:57.533 "num_blocks": 126976, 00:12:57.533 "uuid": "85d4a0c9-a571-4b13-be38-df7215d987d2", 00:12:57.533 "assigned_rate_limits": { 00:12:57.533 "rw_ios_per_sec": 0, 00:12:57.533 "rw_mbytes_per_sec": 0, 00:12:57.533 "r_mbytes_per_sec": 0, 00:12:57.533 "w_mbytes_per_sec": 0 00:12:57.533 }, 00:12:57.533 "claimed": false, 00:12:57.533 "zoned": false, 00:12:57.533 "supported_io_types": { 00:12:57.533 "read": true, 00:12:57.533 "write": true, 00:12:57.533 "unmap": true, 00:12:57.533 "write_zeroes": true, 00:12:57.533 "flush": true, 00:12:57.533 "reset": true, 00:12:57.533 "compare": false, 00:12:57.533 "compare_and_write": false, 00:12:57.533 "abort": false, 00:12:57.533 "nvme_admin": false, 00:12:57.533 "nvme_io": false 00:12:57.533 }, 00:12:57.533 "memory_domains": [ 00:12:57.533 { 00:12:57.533 "dma_device_id": "system", 00:12:57.533 "dma_device_type": 1 00:12:57.533 }, 00:12:57.533 { 00:12:57.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.533 "dma_device_type": 2 00:12:57.533 }, 00:12:57.533 { 00:12:57.533 "dma_device_id": "system", 00:12:57.533 "dma_device_type": 1 00:12:57.533 }, 00:12:57.533 { 00:12:57.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.533 "dma_device_type": 2 00:12:57.533 } 00:12:57.533 ], 00:12:57.533 "driver_specific": { 00:12:57.533 "raid": { 00:12:57.533 "uuid": "85d4a0c9-a571-4b13-be38-df7215d987d2", 00:12:57.533 "strip_size_kb": 64, 00:12:57.533 "state": "online", 00:12:57.533 "raid_level": "raid0", 00:12:57.533 "superblock": true, 00:12:57.533 "num_base_bdevs": 2, 00:12:57.533 "num_base_bdevs_discovered": 2, 00:12:57.533 "num_base_bdevs_operational": 2, 00:12:57.533 "base_bdevs_list": [ 00:12:57.533 { 00:12:57.533 "name": "BaseBdev1", 00:12:57.533 "uuid": "aac3df4c-4842-457b-8191-6f23fe74617c", 00:12:57.533 "is_configured": true, 00:12:57.533 "data_offset": 2048, 00:12:57.533 "data_size": 63488 00:12:57.533 }, 00:12:57.533 { 00:12:57.533 "name": "BaseBdev2", 
00:12:57.533 "uuid": "2dacfc76-d711-46c4-9712-87605768d810", 00:12:57.533 "is_configured": true, 00:12:57.533 "data_offset": 2048, 00:12:57.533 "data_size": 63488 00:12:57.533 } 00:12:57.533 ] 00:12:57.533 } 00:12:57.533 } 00:12:57.533 }' 00:12:57.533 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.533 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:57.533 BaseBdev2' 00:12:57.533 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.533 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:57.533 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:57.790 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:57.790 "name": "BaseBdev1", 00:12:57.790 "aliases": [ 00:12:57.790 "aac3df4c-4842-457b-8191-6f23fe74617c" 00:12:57.790 ], 00:12:57.790 "product_name": "Malloc disk", 00:12:57.790 "block_size": 512, 00:12:57.790 "num_blocks": 65536, 00:12:57.790 "uuid": "aac3df4c-4842-457b-8191-6f23fe74617c", 00:12:57.790 "assigned_rate_limits": { 00:12:57.790 "rw_ios_per_sec": 0, 00:12:57.790 "rw_mbytes_per_sec": 0, 00:12:57.790 "r_mbytes_per_sec": 0, 00:12:57.790 "w_mbytes_per_sec": 0 00:12:57.790 }, 00:12:57.790 "claimed": true, 00:12:57.790 "claim_type": "exclusive_write", 00:12:57.790 "zoned": false, 00:12:57.790 "supported_io_types": { 00:12:57.790 "read": true, 00:12:57.790 "write": true, 00:12:57.790 "unmap": true, 00:12:57.790 "write_zeroes": true, 00:12:57.790 "flush": true, 00:12:57.790 "reset": true, 00:12:57.790 "compare": false, 00:12:57.790 "compare_and_write": false, 00:12:57.790 "abort": 
true, 00:12:57.790 "nvme_admin": false, 00:12:57.790 "nvme_io": false 00:12:57.790 }, 00:12:57.790 "memory_domains": [ 00:12:57.790 { 00:12:57.790 "dma_device_id": "system", 00:12:57.790 "dma_device_type": 1 00:12:57.790 }, 00:12:57.790 { 00:12:57.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.790 "dma_device_type": 2 00:12:57.790 } 00:12:57.790 ], 00:12:57.790 "driver_specific": {} 00:12:57.790 }' 00:12:57.791 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.791 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.049 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.307 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.307 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.307 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 00:12:58.307 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.307 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.307 "name": "BaseBdev2", 00:12:58.307 "aliases": [ 00:12:58.308 "2dacfc76-d711-46c4-9712-87605768d810" 00:12:58.308 ], 00:12:58.308 "product_name": "Malloc disk", 00:12:58.308 "block_size": 512, 00:12:58.308 "num_blocks": 65536, 00:12:58.308 "uuid": "2dacfc76-d711-46c4-9712-87605768d810", 00:12:58.308 "assigned_rate_limits": { 00:12:58.308 "rw_ios_per_sec": 0, 00:12:58.308 "rw_mbytes_per_sec": 0, 00:12:58.308 "r_mbytes_per_sec": 0, 00:12:58.308 "w_mbytes_per_sec": 0 00:12:58.308 }, 00:12:58.308 "claimed": true, 00:12:58.308 "claim_type": "exclusive_write", 00:12:58.308 "zoned": false, 00:12:58.308 "supported_io_types": { 00:12:58.308 "read": true, 00:12:58.308 "write": true, 00:12:58.308 "unmap": true, 00:12:58.308 "write_zeroes": true, 00:12:58.308 "flush": true, 00:12:58.308 "reset": true, 00:12:58.308 "compare": false, 00:12:58.308 "compare_and_write": false, 00:12:58.308 "abort": true, 00:12:58.308 "nvme_admin": false, 00:12:58.308 "nvme_io": false 00:12:58.308 }, 00:12:58.308 "memory_domains": [ 00:12:58.308 { 00:12:58.308 "dma_device_id": "system", 00:12:58.308 "dma_device_type": 1 00:12:58.308 }, 00:12:58.308 { 00:12:58.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.308 "dma_device_type": 2 00:12:58.308 } 00:12:58.308 ], 00:12:58.308 "driver_specific": {} 00:12:58.308 }' 00:12:58.308 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.566 18:57:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.566 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.824 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.824 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.824 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:59.083 [2024-06-10 18:57:13.600036] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.083 [2024-06-10 18:57:13.600060] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.083 [2024-06-10 18:57:13.600097] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:59.083 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.084 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.342 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.342 "name": "Existed_Raid", 00:12:59.342 "uuid": "85d4a0c9-a571-4b13-be38-df7215d987d2", 00:12:59.342 "strip_size_kb": 64, 00:12:59.342 "state": "offline", 00:12:59.342 "raid_level": "raid0", 00:12:59.342 "superblock": true, 00:12:59.342 "num_base_bdevs": 2, 00:12:59.342 "num_base_bdevs_discovered": 1, 00:12:59.342 "num_base_bdevs_operational": 1, 00:12:59.342 "base_bdevs_list": [ 00:12:59.342 { 00:12:59.342 "name": null, 
00:12:59.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.342 "is_configured": false, 00:12:59.342 "data_offset": 2048, 00:12:59.342 "data_size": 63488 00:12:59.342 }, 00:12:59.342 { 00:12:59.342 "name": "BaseBdev2", 00:12:59.342 "uuid": "2dacfc76-d711-46c4-9712-87605768d810", 00:12:59.342 "is_configured": true, 00:12:59.342 "data_offset": 2048, 00:12:59.342 "data_size": 63488 00:12:59.342 } 00:12:59.342 ] 00:12:59.342 }' 00:12:59.342 18:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.342 18:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.909 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:00.168 [2024-06-10 18:57:14.816208] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.168 [2024-06-10 18:57:14.816253] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dae5f0 name Existed_Raid, state offline 00:13:00.168 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:00.168 
18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.168 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.168 18:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1619666 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1619666 ']' 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1619666 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1619666 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1619666' 00:13:00.427 killing process with pid 1619666 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1619666 00:13:00.427 [2024-06-10 
18:57:15.149001] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.427 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1619666 00:13:00.427 [2024-06-10 18:57:15.149842] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.687 18:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:00.687 00:13:00.687 real 0m9.992s 00:13:00.687 user 0m17.695s 00:13:00.687 sys 0m1.876s 00:13:00.687 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:00.687 18:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.687 ************************************ 00:13:00.687 END TEST raid_state_function_test_sb 00:13:00.687 ************************************ 00:13:00.687 18:57:15 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:00.687 18:57:15 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:00.687 18:57:15 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:00.687 18:57:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.687 ************************************ 00:13:00.687 START TEST raid_superblock_test 00:13:00.687 ************************************ 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 2 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # 
base_bdevs_pt=() 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1621592 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1621592 /var/tmp/spdk-raid.sock 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1621592 ']' 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:00.687 18:57:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:00.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:00.687 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.946 [2024-06-10 18:57:15.480245] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:13:00.946 [2024-06-10 18:57:15.480302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621592 ] 00:13:00.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:00.946 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:00.946 
[2024-06-10 18:57:15.614650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.946 [2024-06-10 18:57:15.702274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.205 [2024-06-10 18:57:15.768028] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.205 [2024-06-10 18:57:15.768066] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@863 -- # return 0 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:01.772 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:02.031 malloc1 00:13:02.031 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:02.290 [2024-06-10 18:57:16.822128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:02.290 [2024-06-10 18:57:16.822168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.290 [2024-06-10 18:57:16.822187] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x272bb70 00:13:02.290 [2024-06-10 18:57:16.822198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.290 [2024-06-10 18:57:16.823634] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:13:02.290 [2024-06-10 18:57:16.823660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:02.290 pt1 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:02.290 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:02.549 malloc2 00:13:02.549 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:02.549 [2024-06-10 18:57:17.275680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:02.549 [2024-06-10 18:57:17.275719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.549 [2024-06-10 18:57:17.275734] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x272cf70 00:13:02.549 [2024-06-10 18:57:17.275746] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.549 [2024-06-10 18:57:17.277109] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.549 [2024-06-10 18:57:17.277135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:02.549 pt2 00:13:02.549 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:02.549 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:02.549 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:02.809 [2024-06-10 18:57:17.504296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:02.810 [2024-06-10 18:57:17.505410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:02.810 [2024-06-10 18:57:17.505534] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28cf870 00:13:02.810 [2024-06-10 18:57:17.505546] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.810 [2024-06-10 18:57:17.505721] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28c5140 00:13:02.810 [2024-06-10 18:57:17.505846] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28cf870 00:13:02.810 [2024-06-10 18:57:17.505855] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28cf870 00:13:02.810 [2024-06-10 18:57:17.505940] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.810 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.068 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.068 "name": "raid_bdev1", 00:13:03.068 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:03.068 "strip_size_kb": 64, 00:13:03.068 "state": "online", 00:13:03.068 "raid_level": "raid0", 00:13:03.068 "superblock": true, 00:13:03.068 "num_base_bdevs": 2, 00:13:03.068 "num_base_bdevs_discovered": 2, 00:13:03.068 "num_base_bdevs_operational": 2, 00:13:03.068 "base_bdevs_list": [ 00:13:03.068 { 00:13:03.068 "name": "pt1", 00:13:03.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.068 "is_configured": true, 00:13:03.068 "data_offset": 2048, 00:13:03.068 "data_size": 63488 00:13:03.068 }, 00:13:03.068 { 00:13:03.068 "name": "pt2", 00:13:03.068 "uuid": "00000000-0000-0000-0000-000000000002", 
00:13:03.068 "is_configured": true, 00:13:03.068 "data_offset": 2048, 00:13:03.068 "data_size": 63488 00:13:03.068 } 00:13:03.068 ] 00:13:03.068 }' 00:13:03.068 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.068 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:03.635 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:03.942 [2024-06-10 18:57:18.515127] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.942 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:03.942 "name": "raid_bdev1", 00:13:03.942 "aliases": [ 00:13:03.942 "ed1cc164-11fe-4d7e-900b-4e7018d25ef1" 00:13:03.942 ], 00:13:03.942 "product_name": "Raid Volume", 00:13:03.942 "block_size": 512, 00:13:03.942 "num_blocks": 126976, 00:13:03.942 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:03.942 "assigned_rate_limits": { 00:13:03.942 "rw_ios_per_sec": 0, 00:13:03.942 "rw_mbytes_per_sec": 0, 00:13:03.942 "r_mbytes_per_sec": 0, 00:13:03.942 "w_mbytes_per_sec": 0 00:13:03.942 
}, 00:13:03.942 "claimed": false, 00:13:03.942 "zoned": false, 00:13:03.942 "supported_io_types": { 00:13:03.942 "read": true, 00:13:03.942 "write": true, 00:13:03.942 "unmap": true, 00:13:03.942 "write_zeroes": true, 00:13:03.942 "flush": true, 00:13:03.942 "reset": true, 00:13:03.942 "compare": false, 00:13:03.942 "compare_and_write": false, 00:13:03.942 "abort": false, 00:13:03.942 "nvme_admin": false, 00:13:03.942 "nvme_io": false 00:13:03.942 }, 00:13:03.942 "memory_domains": [ 00:13:03.942 { 00:13:03.942 "dma_device_id": "system", 00:13:03.942 "dma_device_type": 1 00:13:03.942 }, 00:13:03.942 { 00:13:03.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.942 "dma_device_type": 2 00:13:03.942 }, 00:13:03.942 { 00:13:03.942 "dma_device_id": "system", 00:13:03.942 "dma_device_type": 1 00:13:03.942 }, 00:13:03.942 { 00:13:03.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.942 "dma_device_type": 2 00:13:03.942 } 00:13:03.942 ], 00:13:03.942 "driver_specific": { 00:13:03.942 "raid": { 00:13:03.942 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:03.942 "strip_size_kb": 64, 00:13:03.942 "state": "online", 00:13:03.942 "raid_level": "raid0", 00:13:03.942 "superblock": true, 00:13:03.942 "num_base_bdevs": 2, 00:13:03.942 "num_base_bdevs_discovered": 2, 00:13:03.942 "num_base_bdevs_operational": 2, 00:13:03.942 "base_bdevs_list": [ 00:13:03.942 { 00:13:03.942 "name": "pt1", 00:13:03.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.942 "is_configured": true, 00:13:03.942 "data_offset": 2048, 00:13:03.942 "data_size": 63488 00:13:03.942 }, 00:13:03.942 { 00:13:03.942 "name": "pt2", 00:13:03.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.942 "is_configured": true, 00:13:03.942 "data_offset": 2048, 00:13:03.942 "data_size": 63488 00:13:03.942 } 00:13:03.942 ] 00:13:03.942 } 00:13:03.942 } 00:13:03.942 }' 00:13:03.942 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] 
| select(.is_configured == true).name' 00:13:03.942 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:03.942 pt2' 00:13:03.942 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:03.942 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:03.942 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:04.217 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:04.217 "name": "pt1", 00:13:04.217 "aliases": [ 00:13:04.217 "00000000-0000-0000-0000-000000000001" 00:13:04.217 ], 00:13:04.217 "product_name": "passthru", 00:13:04.217 "block_size": 512, 00:13:04.217 "num_blocks": 65536, 00:13:04.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.217 "assigned_rate_limits": { 00:13:04.217 "rw_ios_per_sec": 0, 00:13:04.217 "rw_mbytes_per_sec": 0, 00:13:04.217 "r_mbytes_per_sec": 0, 00:13:04.217 "w_mbytes_per_sec": 0 00:13:04.217 }, 00:13:04.217 "claimed": true, 00:13:04.217 "claim_type": "exclusive_write", 00:13:04.217 "zoned": false, 00:13:04.217 "supported_io_types": { 00:13:04.217 "read": true, 00:13:04.217 "write": true, 00:13:04.217 "unmap": true, 00:13:04.217 "write_zeroes": true, 00:13:04.217 "flush": true, 00:13:04.217 "reset": true, 00:13:04.217 "compare": false, 00:13:04.217 "compare_and_write": false, 00:13:04.217 "abort": true, 00:13:04.217 "nvme_admin": false, 00:13:04.217 "nvme_io": false 00:13:04.217 }, 00:13:04.217 "memory_domains": [ 00:13:04.217 { 00:13:04.217 "dma_device_id": "system", 00:13:04.217 "dma_device_type": 1 00:13:04.217 }, 00:13:04.217 { 00:13:04.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.217 "dma_device_type": 2 00:13:04.217 } 00:13:04.217 ], 00:13:04.217 "driver_specific": { 00:13:04.217 "passthru": { 00:13:04.217 "name": 
"pt1", 00:13:04.217 "base_bdev_name": "malloc1" 00:13:04.217 } 00:13:04.217 } 00:13:04.217 }' 00:13:04.217 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:04.217 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:04.217 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:04.217 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:04.218 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:04.476 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:04.476 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:04.476 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:04.734 "name": "pt2", 00:13:04.734 "aliases": [ 00:13:04.734 "00000000-0000-0000-0000-000000000002" 00:13:04.734 ], 00:13:04.734 "product_name": "passthru", 00:13:04.734 "block_size": 512, 00:13:04.734 
"num_blocks": 65536, 00:13:04.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.734 "assigned_rate_limits": { 00:13:04.734 "rw_ios_per_sec": 0, 00:13:04.734 "rw_mbytes_per_sec": 0, 00:13:04.734 "r_mbytes_per_sec": 0, 00:13:04.734 "w_mbytes_per_sec": 0 00:13:04.734 }, 00:13:04.734 "claimed": true, 00:13:04.734 "claim_type": "exclusive_write", 00:13:04.734 "zoned": false, 00:13:04.734 "supported_io_types": { 00:13:04.734 "read": true, 00:13:04.734 "write": true, 00:13:04.734 "unmap": true, 00:13:04.734 "write_zeroes": true, 00:13:04.734 "flush": true, 00:13:04.734 "reset": true, 00:13:04.734 "compare": false, 00:13:04.734 "compare_and_write": false, 00:13:04.734 "abort": true, 00:13:04.734 "nvme_admin": false, 00:13:04.734 "nvme_io": false 00:13:04.734 }, 00:13:04.734 "memory_domains": [ 00:13:04.734 { 00:13:04.734 "dma_device_id": "system", 00:13:04.734 "dma_device_type": 1 00:13:04.734 }, 00:13:04.734 { 00:13:04.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.734 "dma_device_type": 2 00:13:04.734 } 00:13:04.734 ], 00:13:04.734 "driver_specific": { 00:13:04.734 "passthru": { 00:13:04.734 "name": "pt2", 00:13:04.734 "base_bdev_name": "malloc2" 00:13:04.734 } 00:13:04.734 } 00:13:04.734 }' 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:04.734 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:04.993 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:05.250 [2024-06-10 18:57:19.778466] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.250 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ed1cc164-11fe-4d7e-900b-4e7018d25ef1 00:13:05.250 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ed1cc164-11fe-4d7e-900b-4e7018d25ef1 ']' 00:13:05.250 18:57:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:05.508 [2024-06-10 18:57:20.010883] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.508 [2024-06-10 18:57:20.010903] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.508 [2024-06-10 18:57:20.010956] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.508 [2024-06-10 18:57:20.010997] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.508 [2024-06-10 18:57:20.011010] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28cf870 name raid_bdev1, state offline 00:13:05.508 18:57:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.508 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:05.766 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:05.766 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:05.766 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.766 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:05.767 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.767 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:06.024 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:06.024 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:06.282 18:57:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:06.541 [2024-06-10 18:57:21.161876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:06.541 [2024-06-10 18:57:21.163146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:06.541 [2024-06-10 18:57:21.163199] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:06.541 
[2024-06-10 18:57:21.163238] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:06.541 [2024-06-10 18:57:21.163256] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.541 [2024-06-10 18:57:21.163265] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x272c010 name raid_bdev1, state configuring 00:13:06.541 request: 00:13:06.541 { 00:13:06.541 "name": "raid_bdev1", 00:13:06.541 "raid_level": "raid0", 00:13:06.541 "base_bdevs": [ 00:13:06.541 "malloc1", 00:13:06.541 "malloc2" 00:13:06.541 ], 00:13:06.541 "superblock": false, 00:13:06.541 "strip_size_kb": 64, 00:13:06.541 "method": "bdev_raid_create", 00:13:06.541 "req_id": 1 00:13:06.541 } 00:13:06.541 Got JSON-RPC error response 00:13:06.541 response: 00:13:06.541 { 00:13:06.541 "code": -17, 00:13:06.541 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:06.541 } 00:13:06.541 18:57:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:13:06.541 18:57:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:06.541 18:57:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:06.541 18:57:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:06.541 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.541 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:06.798 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:06.798 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:06.798 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:07.056 [2024-06-10 18:57:21.619023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:07.056 [2024-06-10 18:57:21.619064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.056 [2024-06-10 18:57:21.619080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28cf5f0 00:13:07.056 [2024-06-10 18:57:21.619092] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.056 [2024-06-10 18:57:21.620586] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.056 [2024-06-10 18:57:21.620613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:07.056 [2024-06-10 18:57:21.620680] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:07.056 [2024-06-10 18:57:21.620703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:07.056 pt1 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.056 18:57:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.056 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.314 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.314 "name": "raid_bdev1", 00:13:07.314 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:07.314 "strip_size_kb": 64, 00:13:07.314 "state": "configuring", 00:13:07.314 "raid_level": "raid0", 00:13:07.314 "superblock": true, 00:13:07.314 "num_base_bdevs": 2, 00:13:07.314 "num_base_bdevs_discovered": 1, 00:13:07.314 "num_base_bdevs_operational": 2, 00:13:07.314 "base_bdevs_list": [ 00:13:07.314 { 00:13:07.314 "name": "pt1", 00:13:07.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.314 "is_configured": true, 00:13:07.314 "data_offset": 2048, 00:13:07.314 "data_size": 63488 00:13:07.314 }, 00:13:07.314 { 00:13:07.314 "name": null, 00:13:07.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.314 "is_configured": false, 00:13:07.314 "data_offset": 2048, 00:13:07.314 "data_size": 63488 00:13:07.314 } 00:13:07.314 ] 00:13:07.314 }' 00:13:07.314 18:57:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.314 18:57:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.878 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:13:07.878 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:07.878 18:57:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:07.878 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:08.136 [2024-06-10 18:57:22.645743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:08.136 [2024-06-10 18:57:22.645799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.136 [2024-06-10 18:57:22.645816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x272bda0 00:13:08.136 [2024-06-10 18:57:22.645827] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.136 [2024-06-10 18:57:22.646140] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.136 [2024-06-10 18:57:22.646155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:08.136 [2024-06-10 18:57:22.646215] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:08.136 [2024-06-10 18:57:22.646232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.136 [2024-06-10 18:57:22.646316] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28c4920 00:13:08.136 [2024-06-10 18:57:22.646326] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:08.136 [2024-06-10 18:57:22.646481] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27254e0 00:13:08.136 [2024-06-10 18:57:22.646603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28c4920 00:13:08.136 [2024-06-10 18:57:22.646613] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x28c4920 00:13:08.136 [2024-06-10 18:57:22.646706] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.136 pt2 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.136 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.395 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.395 "name": "raid_bdev1", 00:13:08.395 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:08.395 "strip_size_kb": 64, 00:13:08.395 "state": "online", 00:13:08.395 
"raid_level": "raid0", 00:13:08.395 "superblock": true, 00:13:08.395 "num_base_bdevs": 2, 00:13:08.395 "num_base_bdevs_discovered": 2, 00:13:08.395 "num_base_bdevs_operational": 2, 00:13:08.395 "base_bdevs_list": [ 00:13:08.395 { 00:13:08.395 "name": "pt1", 00:13:08.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.395 "is_configured": true, 00:13:08.395 "data_offset": 2048, 00:13:08.395 "data_size": 63488 00:13:08.395 }, 00:13:08.395 { 00:13:08.395 "name": "pt2", 00:13:08.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.395 "is_configured": true, 00:13:08.395 "data_offset": 2048, 00:13:08.395 "data_size": 63488 00:13:08.395 } 00:13:08.395 ] 00:13:08.395 }' 00:13:08.395 18:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.395 18:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:08.655 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:08.914 [2024-06-10 18:57:23.612486] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.914 18:57:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:08.914 "name": "raid_bdev1", 00:13:08.914 "aliases": [ 00:13:08.914 "ed1cc164-11fe-4d7e-900b-4e7018d25ef1" 00:13:08.914 ], 00:13:08.914 "product_name": "Raid Volume", 00:13:08.914 "block_size": 512, 00:13:08.914 "num_blocks": 126976, 00:13:08.914 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:08.914 "assigned_rate_limits": { 00:13:08.914 "rw_ios_per_sec": 0, 00:13:08.914 "rw_mbytes_per_sec": 0, 00:13:08.914 "r_mbytes_per_sec": 0, 00:13:08.914 "w_mbytes_per_sec": 0 00:13:08.914 }, 00:13:08.914 "claimed": false, 00:13:08.914 "zoned": false, 00:13:08.914 "supported_io_types": { 00:13:08.914 "read": true, 00:13:08.914 "write": true, 00:13:08.914 "unmap": true, 00:13:08.914 "write_zeroes": true, 00:13:08.914 "flush": true, 00:13:08.914 "reset": true, 00:13:08.914 "compare": false, 00:13:08.914 "compare_and_write": false, 00:13:08.914 "abort": false, 00:13:08.914 "nvme_admin": false, 00:13:08.914 "nvme_io": false 00:13:08.914 }, 00:13:08.914 "memory_domains": [ 00:13:08.914 { 00:13:08.914 "dma_device_id": "system", 00:13:08.914 "dma_device_type": 1 00:13:08.914 }, 00:13:08.914 { 00:13:08.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.914 "dma_device_type": 2 00:13:08.914 }, 00:13:08.914 { 00:13:08.914 "dma_device_id": "system", 00:13:08.914 "dma_device_type": 1 00:13:08.914 }, 00:13:08.914 { 00:13:08.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.914 "dma_device_type": 2 00:13:08.914 } 00:13:08.914 ], 00:13:08.914 "driver_specific": { 00:13:08.914 "raid": { 00:13:08.914 "uuid": "ed1cc164-11fe-4d7e-900b-4e7018d25ef1", 00:13:08.914 "strip_size_kb": 64, 00:13:08.914 "state": "online", 00:13:08.914 "raid_level": "raid0", 00:13:08.914 "superblock": true, 00:13:08.914 "num_base_bdevs": 2, 00:13:08.914 "num_base_bdevs_discovered": 2, 00:13:08.914 "num_base_bdevs_operational": 2, 00:13:08.914 "base_bdevs_list": [ 00:13:08.914 { 00:13:08.914 "name": "pt1", 00:13:08.914 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:13:08.914 "is_configured": true, 00:13:08.914 "data_offset": 2048, 00:13:08.914 "data_size": 63488 00:13:08.914 }, 00:13:08.914 { 00:13:08.914 "name": "pt2", 00:13:08.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.914 "is_configured": true, 00:13:08.914 "data_offset": 2048, 00:13:08.914 "data_size": 63488 00:13:08.914 } 00:13:08.914 ] 00:13:08.914 } 00:13:08.914 } 00:13:08.914 }' 00:13:08.914 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.172 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:09.172 pt2' 00:13:09.172 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.173 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:09.173 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:09.173 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:09.173 "name": "pt1", 00:13:09.173 "aliases": [ 00:13:09.173 "00000000-0000-0000-0000-000000000001" 00:13:09.173 ], 00:13:09.173 "product_name": "passthru", 00:13:09.173 "block_size": 512, 00:13:09.173 "num_blocks": 65536, 00:13:09.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:09.173 "assigned_rate_limits": { 00:13:09.173 "rw_ios_per_sec": 0, 00:13:09.173 "rw_mbytes_per_sec": 0, 00:13:09.173 "r_mbytes_per_sec": 0, 00:13:09.173 "w_mbytes_per_sec": 0 00:13:09.173 }, 00:13:09.173 "claimed": true, 00:13:09.173 "claim_type": "exclusive_write", 00:13:09.173 "zoned": false, 00:13:09.173 "supported_io_types": { 00:13:09.173 "read": true, 00:13:09.173 "write": true, 00:13:09.173 "unmap": true, 00:13:09.173 "write_zeroes": true, 00:13:09.173 "flush": 
true, 00:13:09.173 "reset": true, 00:13:09.173 "compare": false, 00:13:09.173 "compare_and_write": false, 00:13:09.173 "abort": true, 00:13:09.173 "nvme_admin": false, 00:13:09.173 "nvme_io": false 00:13:09.173 }, 00:13:09.173 "memory_domains": [ 00:13:09.173 { 00:13:09.173 "dma_device_id": "system", 00:13:09.173 "dma_device_type": 1 00:13:09.173 }, 00:13:09.173 { 00:13:09.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.173 "dma_device_type": 2 00:13:09.173 } 00:13:09.173 ], 00:13:09.173 "driver_specific": { 00:13:09.173 "passthru": { 00:13:09.173 "name": "pt1", 00:13:09.173 "base_bdev_name": "malloc1" 00:13:09.173 } 00:13:09.173 } 00:13:09.173 }' 00:13:09.173 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.430 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.430 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:09.430 18:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.430 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.430 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:09.430 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.430 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.430 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:09.430 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.688 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.688 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:09.688 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.688 18:57:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:09.688 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:09.946 "name": "pt2", 00:13:09.946 "aliases": [ 00:13:09.946 "00000000-0000-0000-0000-000000000002" 00:13:09.946 ], 00:13:09.946 "product_name": "passthru", 00:13:09.946 "block_size": 512, 00:13:09.946 "num_blocks": 65536, 00:13:09.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.946 "assigned_rate_limits": { 00:13:09.946 "rw_ios_per_sec": 0, 00:13:09.946 "rw_mbytes_per_sec": 0, 00:13:09.946 "r_mbytes_per_sec": 0, 00:13:09.946 "w_mbytes_per_sec": 0 00:13:09.946 }, 00:13:09.946 "claimed": true, 00:13:09.946 "claim_type": "exclusive_write", 00:13:09.946 "zoned": false, 00:13:09.946 "supported_io_types": { 00:13:09.946 "read": true, 00:13:09.946 "write": true, 00:13:09.946 "unmap": true, 00:13:09.946 "write_zeroes": true, 00:13:09.946 "flush": true, 00:13:09.946 "reset": true, 00:13:09.946 "compare": false, 00:13:09.946 "compare_and_write": false, 00:13:09.946 "abort": true, 00:13:09.946 "nvme_admin": false, 00:13:09.946 "nvme_io": false 00:13:09.946 }, 00:13:09.946 "memory_domains": [ 00:13:09.946 { 00:13:09.946 "dma_device_id": "system", 00:13:09.946 "dma_device_type": 1 00:13:09.946 }, 00:13:09.946 { 00:13:09.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.946 "dma_device_type": 2 00:13:09.946 } 00:13:09.946 ], 00:13:09.946 "driver_specific": { 00:13:09.946 "passthru": { 00:13:09.946 "name": "pt2", 00:13:09.946 "base_bdev_name": "malloc2" 00:13:09.946 } 00:13:09.946 } 00:13:09.946 }' 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.946 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.203 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.203 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.203 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.203 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.203 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:10.203 18:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:10.461 [2024-06-10 18:57:25.020189] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ed1cc164-11fe-4d7e-900b-4e7018d25ef1 '!=' ed1cc164-11fe-4d7e-900b-4e7018d25ef1 ']' 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1621592 00:13:10.461 
18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1621592 ']' 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1621592 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1621592 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1621592' 00:13:10.461 killing process with pid 1621592 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1621592 00:13:10.461 [2024-06-10 18:57:25.102652] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.461 [2024-06-10 18:57:25.102704] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.461 [2024-06-10 18:57:25.102743] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.461 [2024-06-10 18:57:25.102754] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28c4920 name raid_bdev1, state offline 00:13:10.461 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1621592 00:13:10.461 [2024-06-10 18:57:25.118421] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.720 18:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:10.720 00:13:10.720 real 0m9.889s 00:13:10.720 user 0m17.609s 00:13:10.720 sys 0m1.844s 00:13:10.720 18:57:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:10.720 18:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.720 ************************************ 00:13:10.720 END TEST raid_superblock_test 00:13:10.720 ************************************ 00:13:10.720 18:57:25 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:13:10.720 18:57:25 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:10.720 18:57:25 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:10.720 18:57:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.720 ************************************ 00:13:10.720 START TEST raid_read_error_test 00:13:10.720 ************************************ 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 2 read 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:10.720 
18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.UijsMdq7kg 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1623671 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1623671 /var/tmp/spdk-raid.sock 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1623671 ']' 00:13:10.720 18:57:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:10.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:10.720 18:57:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.720 [2024-06-10 18:57:25.465038] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:13:10.720 [2024-06-10 18:57:25.465093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623671 ] 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:10.979 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.6 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.1 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.4 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:10.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.979 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:10.979 [2024-06-10 18:57:25.597758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.979 [2024-06-10 18:57:25.683764] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.238 [2024-06-10 18:57:25.743977] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.238 [2024-06-10 18:57:25.744015] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.803 18:57:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:11.803 18:57:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:13:11.803 18:57:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:11.803 18:57:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:12.061 BaseBdev1_malloc 00:13:12.061 18:57:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:12.061 true 00:13:12.061 18:57:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:12.319 [2024-06-10 18:57:27.009164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:12.319 [2024-06-10 18:57:27.009203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.319 [2024-06-10 18:57:27.009220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1920d50 00:13:12.319 [2024-06-10 18:57:27.009232] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.319 [2024-06-10 18:57:27.010773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.319 [2024-06-10 18:57:27.010801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:12.319 BaseBdev1 00:13:12.319 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:12.319 18:57:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:12.577 BaseBdev2_malloc 00:13:12.577 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:12.836 true 00:13:12.836 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:13.094 [2024-06-10 18:57:27.683147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:13.094 [2024-06-10 18:57:27.683186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.094 [2024-06-10 18:57:27.683203] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19262e0 00:13:13.094 [2024-06-10 18:57:27.683215] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.094 [2024-06-10 18:57:27.684590] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.094 [2024-06-10 18:57:27.684617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:13.094 BaseBdev2 00:13:13.094 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:13.352 [2024-06-10 18:57:27.907766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.352 [2024-06-10 18:57:27.908941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.352 [2024-06-10 18:57:27.909117] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1927640 00:13:13.352 [2024-06-10 18:57:27.909129] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:13.352 [2024-06-10 18:57:27.909305] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177bcc0 00:13:13.352 [2024-06-10 18:57:27.909437] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1927640 00:13:13.352 [2024-06-10 18:57:27.909446] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1927640 00:13:13.352 [2024-06-10 18:57:27.909537] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.352 18:57:27 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.610 18:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.610 "name": "raid_bdev1", 00:13:13.610 "uuid": "577f2e6d-db52-423b-8c7d-be054f687926", 00:13:13.610 "strip_size_kb": 64, 00:13:13.610 "state": "online", 00:13:13.610 "raid_level": "raid0", 00:13:13.610 "superblock": true, 00:13:13.610 "num_base_bdevs": 2, 00:13:13.610 "num_base_bdevs_discovered": 2, 00:13:13.610 "num_base_bdevs_operational": 2, 00:13:13.610 "base_bdevs_list": [ 00:13:13.610 { 00:13:13.610 "name": "BaseBdev1", 00:13:13.610 "uuid": "8a782e4a-07d7-5ae4-8f69-728aec46da3c", 00:13:13.610 "is_configured": true, 00:13:13.610 "data_offset": 2048, 00:13:13.610 "data_size": 63488 00:13:13.610 }, 00:13:13.610 { 00:13:13.610 "name": "BaseBdev2", 00:13:13.610 "uuid": "ef1aa533-133e-5524-ba53-3bd3b2a98b06", 00:13:13.610 "is_configured": true, 00:13:13.610 "data_offset": 2048, 00:13:13.610 "data_size": 63488 00:13:13.610 } 00:13:13.610 ] 00:13:13.610 }' 00:13:13.610 18:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.610 18:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.175 18:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:14.175 18:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:14.175 [2024-06-10 18:57:28.822383] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1922730 00:13:15.109 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:15.367 18:57:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.367 18:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.625 18:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.625 "name": "raid_bdev1", 00:13:15.625 "uuid": "577f2e6d-db52-423b-8c7d-be054f687926", 00:13:15.625 "strip_size_kb": 64, 00:13:15.625 "state": "online", 
00:13:15.625 "raid_level": "raid0", 00:13:15.625 "superblock": true, 00:13:15.625 "num_base_bdevs": 2, 00:13:15.625 "num_base_bdevs_discovered": 2, 00:13:15.625 "num_base_bdevs_operational": 2, 00:13:15.625 "base_bdevs_list": [ 00:13:15.625 { 00:13:15.625 "name": "BaseBdev1", 00:13:15.625 "uuid": "8a782e4a-07d7-5ae4-8f69-728aec46da3c", 00:13:15.625 "is_configured": true, 00:13:15.625 "data_offset": 2048, 00:13:15.625 "data_size": 63488 00:13:15.625 }, 00:13:15.625 { 00:13:15.625 "name": "BaseBdev2", 00:13:15.625 "uuid": "ef1aa533-133e-5524-ba53-3bd3b2a98b06", 00:13:15.625 "is_configured": true, 00:13:15.625 "data_offset": 2048, 00:13:15.625 "data_size": 63488 00:13:15.625 } 00:13:15.625 ] 00:13:15.625 }' 00:13:15.625 18:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.625 18:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.190 18:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:16.448 [2024-06-10 18:57:30.961393] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.448 [2024-06-10 18:57:30.961425] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.448 [2024-06-10 18:57:30.964337] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.448 [2024-06-10 18:57:30.964379] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.448 [2024-06-10 18:57:30.964403] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.448 [2024-06-10 18:57:30.964412] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1927640 name raid_bdev1, state offline 00:13:16.448 0 00:13:16.448 18:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 
1623671 00:13:16.448 18:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1623671 ']' 00:13:16.448 18:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1623671 00:13:16.448 18:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:13:16.448 18:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:16.448 18:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1623671 00:13:16.448 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:16.448 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:16.448 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1623671' 00:13:16.448 killing process with pid 1623671 00:13:16.448 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1623671 00:13:16.448 [2024-06-10 18:57:31.039782] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.448 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1623671 00:13:16.448 [2024-06-10 18:57:31.049252] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.UijsMdq7kg 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 
00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:13:16.706 00:13:16.706 real 0m5.866s 00:13:16.706 user 0m9.104s 00:13:16.706 sys 0m1.023s 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:16.706 18:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.706 ************************************ 00:13:16.706 END TEST raid_read_error_test 00:13:16.706 ************************************ 00:13:16.706 18:57:31 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:13:16.706 18:57:31 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:16.706 18:57:31 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:16.706 18:57:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.706 ************************************ 00:13:16.706 START TEST raid_write_error_test 00:13:16.706 ************************************ 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 2 write 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # 
(( i++ )) 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:16.706 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.YKtDFd2efh 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1624720 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1624720 /var/tmp/spdk-raid.sock 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1624720 ']' 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:16.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:16.707 18:57:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.707 [2024-06-10 18:57:31.417774] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:13:16.707 [2024-06-10 18:57:31.417833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624720 ] 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.6 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.1 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:16.965 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.4 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:16.965 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:16.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:16.965 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:16.965 [2024-06-10 18:57:31.552404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.965 [2024-06-10 18:57:31.634962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.965 [2024-06-10 18:57:31.693733] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.965 [2024-06-10 18:57:31.693799] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.900 18:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:17.900 18:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:13:17.900 18:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:17.900 18:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.900 BaseBdev1_malloc 00:13:17.900 18:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:18.158 true 00:13:18.158 18:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:18.417 [2024-06-10 18:57:32.975934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:18.417 [2024-06-10 18:57:32.975982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.417 [2024-06-10 18:57:32.976000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xda3d50 00:13:18.417 [2024-06-10 18:57:32.976011] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.417 [2024-06-10 18:57:32.977495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.417 [2024-06-10 18:57:32.977523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:18.417 BaseBdev1 00:13:18.417 18:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:18.417 18:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:18.676 BaseBdev2_malloc 00:13:18.676 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:18.934 true 00:13:18.934 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:18.934 [2024-06-10 18:57:33.670039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:13:18.935 [2024-06-10 18:57:33.670077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.935 [2024-06-10 18:57:33.670093] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xda92e0 00:13:18.935 [2024-06-10 18:57:33.670105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.935 [2024-06-10 18:57:33.671360] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.935 [2024-06-10 18:57:33.671386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:18.935 BaseBdev2 00:13:18.935 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:19.193 [2024-06-10 18:57:33.894658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.193 [2024-06-10 18:57:33.895774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.193 [2024-06-10 18:57:33.895947] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdaa640 00:13:19.193 [2024-06-10 18:57:33.895960] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:19.193 [2024-06-10 18:57:33.896128] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbfecc0 00:13:19.193 [2024-06-10 18:57:33.896258] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdaa640 00:13:19.193 [2024-06-10 18:57:33.896267] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xdaa640 00:13:19.193 [2024-06-10 18:57:33.896355] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.193 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.194 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.194 18:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.452 18:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.452 "name": "raid_bdev1", 00:13:19.452 "uuid": "8c5e4ccc-6a50-4957-a679-0cb93e4dc263", 00:13:19.452 "strip_size_kb": 64, 00:13:19.452 "state": "online", 00:13:19.452 "raid_level": "raid0", 00:13:19.452 "superblock": true, 00:13:19.452 "num_base_bdevs": 2, 00:13:19.452 "num_base_bdevs_discovered": 2, 00:13:19.452 "num_base_bdevs_operational": 2, 00:13:19.452 "base_bdevs_list": [ 00:13:19.452 { 00:13:19.452 "name": "BaseBdev1", 00:13:19.452 "uuid": "8ee963d0-94c2-51da-ac48-ebaf6eb6e426", 00:13:19.452 "is_configured": true, 
00:13:19.452 "data_offset": 2048, 00:13:19.452 "data_size": 63488 00:13:19.452 }, 00:13:19.452 { 00:13:19.452 "name": "BaseBdev2", 00:13:19.452 "uuid": "ae5831ab-0595-59d2-954d-717b6f11f74f", 00:13:19.452 "is_configured": true, 00:13:19.452 "data_offset": 2048, 00:13:19.452 "data_size": 63488 00:13:19.452 } 00:13:19.452 ] 00:13:19.452 }' 00:13:19.452 18:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.452 18:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.019 18:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:20.019 18:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:20.277 [2024-06-10 18:57:34.829331] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xda5730 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.214 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.472 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.472 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.472 18:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.472 18:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.472 "name": "raid_bdev1", 00:13:21.472 "uuid": "8c5e4ccc-6a50-4957-a679-0cb93e4dc263", 00:13:21.472 "strip_size_kb": 64, 00:13:21.472 "state": "online", 00:13:21.472 "raid_level": "raid0", 00:13:21.472 "superblock": true, 00:13:21.472 "num_base_bdevs": 2, 00:13:21.472 "num_base_bdevs_discovered": 2, 00:13:21.472 "num_base_bdevs_operational": 2, 00:13:21.472 "base_bdevs_list": [ 00:13:21.472 { 00:13:21.472 "name": "BaseBdev1", 00:13:21.472 "uuid": "8ee963d0-94c2-51da-ac48-ebaf6eb6e426", 00:13:21.472 "is_configured": true, 00:13:21.472 "data_offset": 2048, 00:13:21.472 "data_size": 63488 00:13:21.472 }, 00:13:21.472 { 00:13:21.472 "name": "BaseBdev2", 00:13:21.472 "uuid": "ae5831ab-0595-59d2-954d-717b6f11f74f", 00:13:21.472 "is_configured": true, 00:13:21.472 "data_offset": 2048, 00:13:21.472 "data_size": 63488 00:13:21.472 } 00:13:21.472 ] 00:13:21.472 }' 00:13:21.472 18:57:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.472 18:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 18:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:22.298 [2024-06-10 18:57:36.971391] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.298 [2024-06-10 18:57:36.971429] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.298 [2024-06-10 18:57:36.974379] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.298 [2024-06-10 18:57:36.974410] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.298 [2024-06-10 18:57:36.974434] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.298 [2024-06-10 18:57:36.974444] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdaa640 name raid_bdev1, state offline 00:13:22.298 0 00:13:22.298 18:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1624720 00:13:22.298 18:57:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1624720 ']' 00:13:22.298 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1624720 00:13:22.298 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:13:22.298 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:22.298 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1624720 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1624720' 00:13:22.556 killing process with pid 1624720 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1624720 00:13:22.556 [2024-06-10 18:57:37.061066] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1624720 00:13:22.556 [2024-06-10 18:57:37.070945] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:22.556 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.YKtDFd2efh 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:13:22.557 00:13:22.557 real 0m5.938s 00:13:22.557 user 0m9.227s 00:13:22.557 sys 0m1.039s 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:22.557 18:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.557 ************************************ 00:13:22.557 END TEST raid_write_error_test 00:13:22.557 ************************************ 00:13:22.815 18:57:37 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level 
in raid0 concat raid1 00:13:22.815 18:57:37 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:22.815 18:57:37 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:22.815 18:57:37 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:22.815 18:57:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.815 ************************************ 00:13:22.815 START TEST raid_state_function_test 00:13:22.815 ************************************ 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 2 false 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:22.815 18:57:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1625786 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1625786' 00:13:22.815 Process raid pid: 1625786 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1625786 /var/tmp/spdk-raid.sock 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1625786 ']' 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:22.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:22.815 18:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.815 [2024-06-10 18:57:37.427658] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:13:22.815 [2024-06-10 18:57:37.427713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.815 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:22.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.815 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:22.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.815 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:13:22.816 EAL: Requested device 0000:b6:01.6 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.1 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: 
Requested device 0000:b8:01.4 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:22.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:22.816 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:22.816 [2024-06-10 18:57:37.563796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.073 [2024-06-10 18:57:37.652173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.073 [2024-06-10 18:57:37.714319] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.073 [2024-06-10 18:57:37.714344] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.672 18:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:23.672 18:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:13:23.672 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:23.963 [2024-06-10 18:57:38.536209] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.963 [2024-06-10 18:57:38.536248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.963 [2024-06-10 18:57:38.536258] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.963 [2024-06-10 18:57:38.536269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.963 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.221 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.221 "name": "Existed_Raid", 00:13:24.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.221 "strip_size_kb": 64, 00:13:24.221 "state": "configuring", 00:13:24.221 "raid_level": "concat", 00:13:24.221 "superblock": false, 00:13:24.221 "num_base_bdevs": 2, 00:13:24.221 "num_base_bdevs_discovered": 0, 00:13:24.221 "num_base_bdevs_operational": 2, 00:13:24.221 "base_bdevs_list": [ 00:13:24.221 { 00:13:24.221 "name": "BaseBdev1", 00:13:24.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.221 "is_configured": false, 00:13:24.221 "data_offset": 0, 00:13:24.221 "data_size": 0 00:13:24.221 }, 00:13:24.221 { 00:13:24.221 "name": "BaseBdev2", 00:13:24.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.221 "is_configured": false, 00:13:24.221 "data_offset": 0, 00:13:24.221 "data_size": 0 00:13:24.221 } 00:13:24.221 ] 00:13:24.221 }' 00:13:24.221 18:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.221 18:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.786 18:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:25.044 [2024-06-10 18:57:39.566788] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.044 [2024-06-10 18:57:39.566817] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1655f10 name Existed_Raid, state configuring 00:13:25.044 18:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:25.044 [2024-06-10 18:57:39.795396] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.044 [2024-06-10 18:57:39.795422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.044 [2024-06-10 18:57:39.795431] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.044 [2024-06-10 18:57:39.795442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.303 18:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.303 [2024-06-10 18:57:40.031179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.303 BaseBdev1 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:25.303 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.562 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.820 [ 00:13:25.820 { 00:13:25.820 "name": "BaseBdev1", 00:13:25.820 "aliases": [ 00:13:25.820 "5e4581cf-d7b5-4b16-9a0b-935f3da37726" 00:13:25.820 ], 00:13:25.820 "product_name": "Malloc disk", 00:13:25.820 "block_size": 512, 00:13:25.820 "num_blocks": 65536, 00:13:25.820 "uuid": "5e4581cf-d7b5-4b16-9a0b-935f3da37726", 00:13:25.820 "assigned_rate_limits": { 00:13:25.820 "rw_ios_per_sec": 0, 00:13:25.820 "rw_mbytes_per_sec": 0, 00:13:25.820 "r_mbytes_per_sec": 0, 00:13:25.820 "w_mbytes_per_sec": 0 00:13:25.820 }, 00:13:25.820 "claimed": true, 00:13:25.820 "claim_type": "exclusive_write", 00:13:25.820 "zoned": false, 00:13:25.820 "supported_io_types": { 00:13:25.820 "read": true, 00:13:25.820 "write": true, 00:13:25.820 "unmap": true, 00:13:25.820 "write_zeroes": true, 00:13:25.820 "flush": true, 00:13:25.820 "reset": true, 00:13:25.820 "compare": false, 00:13:25.820 "compare_and_write": false, 00:13:25.820 "abort": true, 00:13:25.820 "nvme_admin": false, 00:13:25.820 "nvme_io": false 00:13:25.820 }, 00:13:25.820 "memory_domains": [ 00:13:25.820 { 00:13:25.820 "dma_device_id": "system", 00:13:25.820 "dma_device_type": 1 00:13:25.820 }, 00:13:25.820 { 00:13:25.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.820 "dma_device_type": 2 00:13:25.820 } 00:13:25.820 ], 00:13:25.820 "driver_specific": {} 00:13:25.820 } 00:13:25.820 ] 00:13:25.820 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:13:25.820 18:57:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:25.820 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.820 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.821 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.079 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.079 "name": "Existed_Raid", 00:13:26.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.079 "strip_size_kb": 64, 00:13:26.079 "state": "configuring", 00:13:26.079 "raid_level": "concat", 00:13:26.079 "superblock": false, 00:13:26.079 "num_base_bdevs": 2, 00:13:26.079 "num_base_bdevs_discovered": 1, 00:13:26.079 "num_base_bdevs_operational": 2, 00:13:26.079 "base_bdevs_list": [ 00:13:26.079 { 
00:13:26.079 "name": "BaseBdev1", 00:13:26.079 "uuid": "5e4581cf-d7b5-4b16-9a0b-935f3da37726", 00:13:26.079 "is_configured": true, 00:13:26.079 "data_offset": 0, 00:13:26.079 "data_size": 65536 00:13:26.079 }, 00:13:26.079 { 00:13:26.079 "name": "BaseBdev2", 00:13:26.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.079 "is_configured": false, 00:13:26.079 "data_offset": 0, 00:13:26.079 "data_size": 0 00:13:26.079 } 00:13:26.079 ] 00:13:26.079 }' 00:13:26.079 18:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.079 18:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.644 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:26.903 [2024-06-10 18:57:41.535117] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.903 [2024-06-10 18:57:41.535152] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1655800 name Existed_Raid, state configuring 00:13:26.903 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:27.161 [2024-06-10 18:57:41.763748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.161 [2024-06-10 18:57:41.765094] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.161 [2024-06-10 18:57:41.765127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 
00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.161 18:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.420 18:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.420 "name": "Existed_Raid", 00:13:27.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.420 "strip_size_kb": 64, 00:13:27.420 "state": "configuring", 00:13:27.420 "raid_level": "concat", 00:13:27.420 "superblock": false, 00:13:27.420 "num_base_bdevs": 2, 00:13:27.420 "num_base_bdevs_discovered": 1, 00:13:27.420 "num_base_bdevs_operational": 2, 00:13:27.420 "base_bdevs_list": [ 
00:13:27.420 { 00:13:27.420 "name": "BaseBdev1", 00:13:27.420 "uuid": "5e4581cf-d7b5-4b16-9a0b-935f3da37726", 00:13:27.420 "is_configured": true, 00:13:27.420 "data_offset": 0, 00:13:27.420 "data_size": 65536 00:13:27.420 }, 00:13:27.420 { 00:13:27.420 "name": "BaseBdev2", 00:13:27.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.420 "is_configured": false, 00:13:27.420 "data_offset": 0, 00:13:27.420 "data_size": 0 00:13:27.420 } 00:13:27.420 ] 00:13:27.420 }' 00:13:27.420 18:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.420 18:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.986 18:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:28.245 [2024-06-10 18:57:42.801758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.245 [2024-06-10 18:57:42.801791] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16565f0 00:13:28.245 [2024-06-10 18:57:42.801799] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:28.245 [2024-06-10 18:57:42.802030] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18083b0 00:13:28.245 [2024-06-10 18:57:42.802135] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16565f0 00:13:28.245 [2024-06-10 18:57:42.802144] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16565f0 00:13:28.245 [2024-06-10 18:57:42.802291] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.245 BaseBdev2 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:28.245 18:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:28.503 18:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:28.761 [ 00:13:28.761 { 00:13:28.761 "name": "BaseBdev2", 00:13:28.761 "aliases": [ 00:13:28.761 "6c807867-6aa0-49cf-911e-02c0031b054a" 00:13:28.761 ], 00:13:28.761 "product_name": "Malloc disk", 00:13:28.761 "block_size": 512, 00:13:28.761 "num_blocks": 65536, 00:13:28.761 "uuid": "6c807867-6aa0-49cf-911e-02c0031b054a", 00:13:28.761 "assigned_rate_limits": { 00:13:28.761 "rw_ios_per_sec": 0, 00:13:28.761 "rw_mbytes_per_sec": 0, 00:13:28.761 "r_mbytes_per_sec": 0, 00:13:28.761 "w_mbytes_per_sec": 0 00:13:28.761 }, 00:13:28.761 "claimed": true, 00:13:28.761 "claim_type": "exclusive_write", 00:13:28.761 "zoned": false, 00:13:28.761 "supported_io_types": { 00:13:28.761 "read": true, 00:13:28.761 "write": true, 00:13:28.761 "unmap": true, 00:13:28.761 "write_zeroes": true, 00:13:28.761 "flush": true, 00:13:28.761 "reset": true, 00:13:28.761 "compare": false, 00:13:28.761 "compare_and_write": false, 00:13:28.761 "abort": true, 00:13:28.761 "nvme_admin": false, 00:13:28.761 "nvme_io": false 00:13:28.761 }, 00:13:28.761 "memory_domains": [ 00:13:28.761 { 
00:13:28.761 "dma_device_id": "system", 00:13:28.761 "dma_device_type": 1 00:13:28.761 }, 00:13:28.761 { 00:13:28.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.761 "dma_device_type": 2 00:13:28.761 } 00:13:28.761 ], 00:13:28.761 "driver_specific": {} 00:13:28.761 } 00:13:28.761 ] 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.761 18:57:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.761 "name": "Existed_Raid", 00:13:28.761 "uuid": "28508cb4-524d-489f-a667-88cd32e2069d", 00:13:28.761 "strip_size_kb": 64, 00:13:28.761 "state": "online", 00:13:28.761 "raid_level": "concat", 00:13:28.761 "superblock": false, 00:13:28.761 "num_base_bdevs": 2, 00:13:28.761 "num_base_bdevs_discovered": 2, 00:13:28.761 "num_base_bdevs_operational": 2, 00:13:28.761 "base_bdevs_list": [ 00:13:28.761 { 00:13:28.761 "name": "BaseBdev1", 00:13:28.761 "uuid": "5e4581cf-d7b5-4b16-9a0b-935f3da37726", 00:13:28.761 "is_configured": true, 00:13:28.761 "data_offset": 0, 00:13:28.761 "data_size": 65536 00:13:28.761 }, 00:13:28.761 { 00:13:28.761 "name": "BaseBdev2", 00:13:28.761 "uuid": "6c807867-6aa0-49cf-911e-02c0031b054a", 00:13:28.761 "is_configured": true, 00:13:28.761 "data_offset": 0, 00:13:28.761 "data_size": 65536 00:13:28.761 } 00:13:28.761 ] 00:13:28.761 }' 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.761 18:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:29.696 18:57:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:29.696 [2024-06-10 18:57:44.301924] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:29.696 "name": "Existed_Raid", 00:13:29.696 "aliases": [ 00:13:29.696 "28508cb4-524d-489f-a667-88cd32e2069d" 00:13:29.696 ], 00:13:29.696 "product_name": "Raid Volume", 00:13:29.696 "block_size": 512, 00:13:29.696 "num_blocks": 131072, 00:13:29.696 "uuid": "28508cb4-524d-489f-a667-88cd32e2069d", 00:13:29.696 "assigned_rate_limits": { 00:13:29.696 "rw_ios_per_sec": 0, 00:13:29.696 "rw_mbytes_per_sec": 0, 00:13:29.696 "r_mbytes_per_sec": 0, 00:13:29.696 "w_mbytes_per_sec": 0 00:13:29.696 }, 00:13:29.696 "claimed": false, 00:13:29.696 "zoned": false, 00:13:29.696 "supported_io_types": { 00:13:29.696 "read": true, 00:13:29.696 "write": true, 00:13:29.696 "unmap": true, 00:13:29.696 "write_zeroes": true, 00:13:29.696 "flush": true, 00:13:29.696 "reset": true, 00:13:29.696 "compare": false, 00:13:29.696 "compare_and_write": false, 00:13:29.696 "abort": false, 00:13:29.696 "nvme_admin": false, 00:13:29.696 "nvme_io": false 00:13:29.696 }, 00:13:29.696 "memory_domains": [ 00:13:29.696 { 00:13:29.696 "dma_device_id": "system", 00:13:29.696 "dma_device_type": 1 00:13:29.696 }, 00:13:29.696 { 00:13:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.696 "dma_device_type": 2 00:13:29.696 }, 00:13:29.696 { 00:13:29.696 "dma_device_id": "system", 00:13:29.696 "dma_device_type": 1 00:13:29.696 }, 00:13:29.696 { 00:13:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.696 "dma_device_type": 2 00:13:29.696 } 00:13:29.696 ], 00:13:29.696 "driver_specific": { 
00:13:29.696 "raid": { 00:13:29.696 "uuid": "28508cb4-524d-489f-a667-88cd32e2069d", 00:13:29.696 "strip_size_kb": 64, 00:13:29.696 "state": "online", 00:13:29.696 "raid_level": "concat", 00:13:29.696 "superblock": false, 00:13:29.696 "num_base_bdevs": 2, 00:13:29.696 "num_base_bdevs_discovered": 2, 00:13:29.696 "num_base_bdevs_operational": 2, 00:13:29.696 "base_bdevs_list": [ 00:13:29.696 { 00:13:29.696 "name": "BaseBdev1", 00:13:29.696 "uuid": "5e4581cf-d7b5-4b16-9a0b-935f3da37726", 00:13:29.696 "is_configured": true, 00:13:29.696 "data_offset": 0, 00:13:29.696 "data_size": 65536 00:13:29.696 }, 00:13:29.696 { 00:13:29.696 "name": "BaseBdev2", 00:13:29.696 "uuid": "6c807867-6aa0-49cf-911e-02c0031b054a", 00:13:29.696 "is_configured": true, 00:13:29.696 "data_offset": 0, 00:13:29.696 "data_size": 65536 00:13:29.696 } 00:13:29.696 ] 00:13:29.696 } 00:13:29.696 } 00:13:29.696 }' 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:29.696 BaseBdev2' 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:29.696 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:29.954 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:29.954 "name": "BaseBdev1", 00:13:29.954 "aliases": [ 00:13:29.954 "5e4581cf-d7b5-4b16-9a0b-935f3da37726" 00:13:29.954 ], 00:13:29.954 "product_name": "Malloc disk", 00:13:29.954 "block_size": 512, 00:13:29.954 "num_blocks": 65536, 00:13:29.954 "uuid": 
"5e4581cf-d7b5-4b16-9a0b-935f3da37726", 00:13:29.954 "assigned_rate_limits": { 00:13:29.954 "rw_ios_per_sec": 0, 00:13:29.954 "rw_mbytes_per_sec": 0, 00:13:29.954 "r_mbytes_per_sec": 0, 00:13:29.954 "w_mbytes_per_sec": 0 00:13:29.954 }, 00:13:29.954 "claimed": true, 00:13:29.954 "claim_type": "exclusive_write", 00:13:29.954 "zoned": false, 00:13:29.954 "supported_io_types": { 00:13:29.954 "read": true, 00:13:29.954 "write": true, 00:13:29.954 "unmap": true, 00:13:29.954 "write_zeroes": true, 00:13:29.954 "flush": true, 00:13:29.954 "reset": true, 00:13:29.954 "compare": false, 00:13:29.954 "compare_and_write": false, 00:13:29.954 "abort": true, 00:13:29.954 "nvme_admin": false, 00:13:29.954 "nvme_io": false 00:13:29.954 }, 00:13:29.954 "memory_domains": [ 00:13:29.954 { 00:13:29.954 "dma_device_id": "system", 00:13:29.954 "dma_device_type": 1 00:13:29.954 }, 00:13:29.954 { 00:13:29.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.954 "dma_device_type": 2 00:13:29.954 } 00:13:29.954 ], 00:13:29.954 "driver_specific": {} 00:13:29.954 }' 00:13:29.954 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:29.954 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:29.954 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:29.954 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:30.212 
18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:30.212 18:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:30.470 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:30.470 "name": "BaseBdev2", 00:13:30.470 "aliases": [ 00:13:30.470 "6c807867-6aa0-49cf-911e-02c0031b054a" 00:13:30.470 ], 00:13:30.470 "product_name": "Malloc disk", 00:13:30.470 "block_size": 512, 00:13:30.470 "num_blocks": 65536, 00:13:30.470 "uuid": "6c807867-6aa0-49cf-911e-02c0031b054a", 00:13:30.470 "assigned_rate_limits": { 00:13:30.470 "rw_ios_per_sec": 0, 00:13:30.470 "rw_mbytes_per_sec": 0, 00:13:30.470 "r_mbytes_per_sec": 0, 00:13:30.470 "w_mbytes_per_sec": 0 00:13:30.470 }, 00:13:30.470 "claimed": true, 00:13:30.470 "claim_type": "exclusive_write", 00:13:30.470 "zoned": false, 00:13:30.470 "supported_io_types": { 00:13:30.470 "read": true, 00:13:30.470 "write": true, 00:13:30.470 "unmap": true, 00:13:30.470 "write_zeroes": true, 00:13:30.470 "flush": true, 00:13:30.470 "reset": true, 00:13:30.470 "compare": false, 00:13:30.470 "compare_and_write": false, 00:13:30.470 "abort": true, 00:13:30.470 "nvme_admin": false, 00:13:30.470 "nvme_io": false 00:13:30.470 }, 00:13:30.470 "memory_domains": [ 00:13:30.470 { 00:13:30.470 "dma_device_id": "system", 00:13:30.470 "dma_device_type": 1 00:13:30.470 }, 00:13:30.470 { 00:13:30.470 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:30.470 "dma_device_type": 2 00:13:30.470 } 00:13:30.470 ], 00:13:30.470 "driver_specific": {} 00:13:30.470 }' 00:13:30.470 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.470 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.728 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:30.986 [2024-06-10 18:57:45.681393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.986 [2024-06-10 18:57:45.681418] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.986 [2024-06-10 18:57:45.681454] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@275 -- # local expected_state 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.986 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.244 18:57:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.244 "name": "Existed_Raid", 00:13:31.244 "uuid": "28508cb4-524d-489f-a667-88cd32e2069d", 00:13:31.244 "strip_size_kb": 64, 00:13:31.244 "state": "offline", 00:13:31.244 "raid_level": "concat", 00:13:31.244 "superblock": false, 00:13:31.244 "num_base_bdevs": 2, 00:13:31.244 "num_base_bdevs_discovered": 1, 00:13:31.244 "num_base_bdevs_operational": 1, 00:13:31.244 "base_bdevs_list": [ 00:13:31.244 { 00:13:31.244 "name": null, 00:13:31.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.244 "is_configured": false, 00:13:31.244 "data_offset": 0, 00:13:31.244 "data_size": 65536 00:13:31.244 }, 00:13:31.244 { 00:13:31.244 "name": "BaseBdev2", 00:13:31.244 "uuid": "6c807867-6aa0-49cf-911e-02c0031b054a", 00:13:31.244 "is_configured": true, 00:13:31.244 "data_offset": 0, 00:13:31.244 "data_size": 65536 00:13:31.244 } 00:13:31.244 ] 00:13:31.244 }' 00:13:31.244 18:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.244 18:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.810 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:31.810 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:31.810 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.810 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:32.068 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:32.068 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.068 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:32.326 [2024-06-10 18:57:46.929702] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.326 [2024-06-10 18:57:46.929748] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16565f0 name Existed_Raid, state offline 00:13:32.326 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:32.326 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:32.326 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.326 18:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1625786 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1625786 ']' 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1625786 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1625786 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1625786' 00:13:32.584 killing process with pid 1625786 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1625786 00:13:32.584 [2024-06-10 18:57:47.246922] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.584 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1625786 00:13:32.584 [2024-06-10 18:57:47.247784] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:32.842 00:13:32.842 real 0m10.077s 00:13:32.842 user 0m17.864s 00:13:32.842 sys 0m1.908s 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.842 ************************************ 00:13:32.842 END TEST raid_state_function_test 00:13:32.842 ************************************ 00:13:32.842 18:57:47 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:32.842 18:57:47 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:32.842 18:57:47 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:32.842 18:57:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.842 ************************************ 00:13:32.842 START TEST raid_state_function_test_sb 00:13:32.842 ************************************ 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 2 true 00:13:32.842 18:57:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:32.842 
18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1627836 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1627836' 00:13:32.842 Process raid pid: 1627836 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1627836 /var/tmp/spdk-raid.sock 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1627836 ']' 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:32.842 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.842 [2024-06-10 18:57:47.596015] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:13:32.842 [2024-06-10 18:57:47.596073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.6 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.1 cannot be 
used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.4 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:33.101 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:33.101 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:33.101 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:33.101 [2024-06-10 18:57:47.732241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.101 [2024-06-10 18:57:47.818356] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.360 [2024-06-10 18:57:47.886836] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.360 [2024-06-10 18:57:47.886864] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.927 18:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:33.927 18:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:13:33.927 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:34.186 [2024-06-10 18:57:48.689375] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.186 [2024-06-10 18:57:48.689412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.186 [2024-06-10 18:57:48.689422] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.186 [2024-06-10 18:57:48.689434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.186 18:57:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.186 "name": "Existed_Raid", 00:13:34.186 "uuid": "134b7efb-0f8e-4167-87ee-c976ead8da53", 00:13:34.186 "strip_size_kb": 64, 00:13:34.186 "state": "configuring", 00:13:34.186 "raid_level": "concat", 00:13:34.186 "superblock": true, 00:13:34.186 "num_base_bdevs": 2, 00:13:34.186 "num_base_bdevs_discovered": 0, 00:13:34.186 "num_base_bdevs_operational": 2, 00:13:34.186 "base_bdevs_list": [ 00:13:34.186 { 00:13:34.186 "name": "BaseBdev1", 00:13:34.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.186 "is_configured": false, 00:13:34.186 "data_offset": 0, 00:13:34.186 "data_size": 0 00:13:34.186 }, 00:13:34.186 { 00:13:34.186 "name": "BaseBdev2", 00:13:34.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.186 "is_configured": false, 00:13:34.186 "data_offset": 0, 00:13:34.186 "data_size": 0 00:13:34.186 } 00:13:34.186 ] 00:13:34.186 }' 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.186 18:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.752 18:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:35.011 [2024-06-10 18:57:49.683879] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.011 [2024-06-10 18:57:49.683907] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbe1f10 name Existed_Raid, state configuring 00:13:35.011 18:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat 
-b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:35.269 [2024-06-10 18:57:49.912491] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.269 [2024-06-10 18:57:49.912516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.269 [2024-06-10 18:57:49.912525] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.269 [2024-06-10 18:57:49.912536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.269 18:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.527 [2024-06-10 18:57:50.154654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.527 BaseBdev1 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:35.527 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.785 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:36.043 [ 00:13:36.043 { 00:13:36.043 "name": "BaseBdev1", 00:13:36.043 "aliases": [ 00:13:36.043 "5219989e-0f31-4deb-a09e-894ff7368d77" 00:13:36.043 ], 00:13:36.043 "product_name": "Malloc disk", 00:13:36.043 "block_size": 512, 00:13:36.043 "num_blocks": 65536, 00:13:36.043 "uuid": "5219989e-0f31-4deb-a09e-894ff7368d77", 00:13:36.043 "assigned_rate_limits": { 00:13:36.043 "rw_ios_per_sec": 0, 00:13:36.043 "rw_mbytes_per_sec": 0, 00:13:36.043 "r_mbytes_per_sec": 0, 00:13:36.043 "w_mbytes_per_sec": 0 00:13:36.043 }, 00:13:36.043 "claimed": true, 00:13:36.043 "claim_type": "exclusive_write", 00:13:36.043 "zoned": false, 00:13:36.043 "supported_io_types": { 00:13:36.043 "read": true, 00:13:36.043 "write": true, 00:13:36.043 "unmap": true, 00:13:36.043 "write_zeroes": true, 00:13:36.043 "flush": true, 00:13:36.043 "reset": true, 00:13:36.043 "compare": false, 00:13:36.043 "compare_and_write": false, 00:13:36.043 "abort": true, 00:13:36.043 "nvme_admin": false, 00:13:36.043 "nvme_io": false 00:13:36.043 }, 00:13:36.043 "memory_domains": [ 00:13:36.043 { 00:13:36.043 "dma_device_id": "system", 00:13:36.043 "dma_device_type": 1 00:13:36.043 }, 00:13:36.043 { 00:13:36.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.043 "dma_device_type": 2 00:13:36.043 } 00:13:36.043 ], 00:13:36.043 "driver_specific": {} 00:13:36.043 } 00:13:36.043 ] 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.043 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.301 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.301 "name": "Existed_Raid", 00:13:36.301 "uuid": "06fbdd35-f0d0-4635-bd52-f52751e7e856", 00:13:36.301 "strip_size_kb": 64, 00:13:36.301 "state": "configuring", 00:13:36.301 "raid_level": "concat", 00:13:36.301 "superblock": true, 00:13:36.301 "num_base_bdevs": 2, 00:13:36.301 "num_base_bdevs_discovered": 1, 00:13:36.301 "num_base_bdevs_operational": 2, 00:13:36.301 "base_bdevs_list": [ 00:13:36.301 { 00:13:36.301 "name": "BaseBdev1", 00:13:36.301 "uuid": "5219989e-0f31-4deb-a09e-894ff7368d77", 00:13:36.301 "is_configured": true, 00:13:36.301 "data_offset": 2048, 00:13:36.301 "data_size": 63488 00:13:36.301 }, 00:13:36.301 { 00:13:36.301 "name": "BaseBdev2", 00:13:36.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.301 "is_configured": false, 00:13:36.301 "data_offset": 0, 00:13:36.301 
"data_size": 0 00:13:36.301 } 00:13:36.301 ] 00:13:36.301 }' 00:13:36.301 18:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.301 18:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.868 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:36.868 [2024-06-10 18:57:51.614469] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.868 [2024-06-10 18:57:51.614503] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbe1800 name Existed_Raid, state configuring 00:13:37.126 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:37.126 [2024-06-10 18:57:51.843101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.126 [2024-06-10 18:57:51.844436] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.126 [2024-06-10 18:57:51.844467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.126 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:37.126 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:37.126 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:37.126 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.127 18:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.385 18:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.385 "name": "Existed_Raid", 00:13:37.385 "uuid": "bf05c673-45d9-448d-b9c9-57a83e715a6c", 00:13:37.385 "strip_size_kb": 64, 00:13:37.385 "state": "configuring", 00:13:37.385 "raid_level": "concat", 00:13:37.385 "superblock": true, 00:13:37.385 "num_base_bdevs": 2, 00:13:37.385 "num_base_bdevs_discovered": 1, 00:13:37.385 "num_base_bdevs_operational": 2, 00:13:37.385 "base_bdevs_list": [ 00:13:37.385 { 00:13:37.385 "name": "BaseBdev1", 00:13:37.385 "uuid": "5219989e-0f31-4deb-a09e-894ff7368d77", 00:13:37.385 "is_configured": true, 00:13:37.385 "data_offset": 2048, 00:13:37.385 "data_size": 63488 00:13:37.385 }, 00:13:37.385 { 00:13:37.385 "name": "BaseBdev2", 00:13:37.385 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:37.385 "is_configured": false, 00:13:37.385 "data_offset": 0, 00:13:37.385 "data_size": 0 00:13:37.385 } 00:13:37.385 ] 00:13:37.385 }' 00:13:37.385 18:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.385 18:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.951 18:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:38.209 [2024-06-10 18:57:52.868897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.209 [2024-06-10 18:57:52.869025] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbe25f0 00:13:38.209 [2024-06-10 18:57:52.869037] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:38.209 [2024-06-10 18:57:52.869197] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd943b0 00:13:38.209 [2024-06-10 18:57:52.869299] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbe25f0 00:13:38.209 [2024-06-10 18:57:52.869309] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xbe25f0 00:13:38.209 [2024-06-10 18:57:52.869388] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.209 BaseBdev2 00:13:38.209 18:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:38.209 18:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:13:38.209 18:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:38.209 18:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:38.209 18:57:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:38.209 18:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:38.209 18:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.467 18:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:38.726 [ 00:13:38.726 { 00:13:38.726 "name": "BaseBdev2", 00:13:38.726 "aliases": [ 00:13:38.726 "a4249424-a145-4f88-b3f9-ea6749ecb665" 00:13:38.726 ], 00:13:38.726 "product_name": "Malloc disk", 00:13:38.726 "block_size": 512, 00:13:38.726 "num_blocks": 65536, 00:13:38.726 "uuid": "a4249424-a145-4f88-b3f9-ea6749ecb665", 00:13:38.726 "assigned_rate_limits": { 00:13:38.726 "rw_ios_per_sec": 0, 00:13:38.726 "rw_mbytes_per_sec": 0, 00:13:38.726 "r_mbytes_per_sec": 0, 00:13:38.726 "w_mbytes_per_sec": 0 00:13:38.726 }, 00:13:38.726 "claimed": true, 00:13:38.726 "claim_type": "exclusive_write", 00:13:38.726 "zoned": false, 00:13:38.726 "supported_io_types": { 00:13:38.726 "read": true, 00:13:38.726 "write": true, 00:13:38.726 "unmap": true, 00:13:38.726 "write_zeroes": true, 00:13:38.726 "flush": true, 00:13:38.726 "reset": true, 00:13:38.726 "compare": false, 00:13:38.726 "compare_and_write": false, 00:13:38.726 "abort": true, 00:13:38.726 "nvme_admin": false, 00:13:38.726 "nvme_io": false 00:13:38.726 }, 00:13:38.726 "memory_domains": [ 00:13:38.726 { 00:13:38.726 "dma_device_id": "system", 00:13:38.726 "dma_device_type": 1 00:13:38.726 }, 00:13:38.726 { 00:13:38.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.726 "dma_device_type": 2 00:13:38.726 } 00:13:38.726 ], 00:13:38.726 "driver_specific": {} 00:13:38.726 } 00:13:38.726 ] 
00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.726 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.984 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.984 "name": "Existed_Raid", 00:13:38.984 
"uuid": "bf05c673-45d9-448d-b9c9-57a83e715a6c", 00:13:38.984 "strip_size_kb": 64, 00:13:38.984 "state": "online", 00:13:38.984 "raid_level": "concat", 00:13:38.984 "superblock": true, 00:13:38.984 "num_base_bdevs": 2, 00:13:38.984 "num_base_bdevs_discovered": 2, 00:13:38.984 "num_base_bdevs_operational": 2, 00:13:38.984 "base_bdevs_list": [ 00:13:38.984 { 00:13:38.984 "name": "BaseBdev1", 00:13:38.984 "uuid": "5219989e-0f31-4deb-a09e-894ff7368d77", 00:13:38.984 "is_configured": true, 00:13:38.984 "data_offset": 2048, 00:13:38.984 "data_size": 63488 00:13:38.984 }, 00:13:38.984 { 00:13:38.984 "name": "BaseBdev2", 00:13:38.984 "uuid": "a4249424-a145-4f88-b3f9-ea6749ecb665", 00:13:38.984 "is_configured": true, 00:13:38.984 "data_offset": 2048, 00:13:38.984 "data_size": 63488 00:13:38.984 } 00:13:38.984 ] 00:13:38.984 }' 00:13:38.984 18:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.984 18:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.550 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.550 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:39.550 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:39.551 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:39.551 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:39.551 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:39.551 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:39.551 18:57:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:39.809 [2024-06-10 18:57:54.357038] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.809 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:39.809 "name": "Existed_Raid", 00:13:39.809 "aliases": [ 00:13:39.809 "bf05c673-45d9-448d-b9c9-57a83e715a6c" 00:13:39.809 ], 00:13:39.809 "product_name": "Raid Volume", 00:13:39.809 "block_size": 512, 00:13:39.809 "num_blocks": 126976, 00:13:39.809 "uuid": "bf05c673-45d9-448d-b9c9-57a83e715a6c", 00:13:39.809 "assigned_rate_limits": { 00:13:39.809 "rw_ios_per_sec": 0, 00:13:39.809 "rw_mbytes_per_sec": 0, 00:13:39.809 "r_mbytes_per_sec": 0, 00:13:39.809 "w_mbytes_per_sec": 0 00:13:39.809 }, 00:13:39.809 "claimed": false, 00:13:39.809 "zoned": false, 00:13:39.809 "supported_io_types": { 00:13:39.809 "read": true, 00:13:39.809 "write": true, 00:13:39.809 "unmap": true, 00:13:39.809 "write_zeroes": true, 00:13:39.809 "flush": true, 00:13:39.809 "reset": true, 00:13:39.809 "compare": false, 00:13:39.809 "compare_and_write": false, 00:13:39.809 "abort": false, 00:13:39.809 "nvme_admin": false, 00:13:39.809 "nvme_io": false 00:13:39.809 }, 00:13:39.809 "memory_domains": [ 00:13:39.809 { 00:13:39.809 "dma_device_id": "system", 00:13:39.809 "dma_device_type": 1 00:13:39.809 }, 00:13:39.809 { 00:13:39.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.809 "dma_device_type": 2 00:13:39.809 }, 00:13:39.809 { 00:13:39.809 "dma_device_id": "system", 00:13:39.809 "dma_device_type": 1 00:13:39.809 }, 00:13:39.809 { 00:13:39.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.809 "dma_device_type": 2 00:13:39.809 } 00:13:39.809 ], 00:13:39.809 "driver_specific": { 00:13:39.809 "raid": { 00:13:39.809 "uuid": "bf05c673-45d9-448d-b9c9-57a83e715a6c", 00:13:39.809 "strip_size_kb": 64, 00:13:39.809 "state": "online", 00:13:39.809 "raid_level": "concat", 00:13:39.809 "superblock": true, 00:13:39.809 
"num_base_bdevs": 2, 00:13:39.809 "num_base_bdevs_discovered": 2, 00:13:39.809 "num_base_bdevs_operational": 2, 00:13:39.809 "base_bdevs_list": [ 00:13:39.809 { 00:13:39.809 "name": "BaseBdev1", 00:13:39.809 "uuid": "5219989e-0f31-4deb-a09e-894ff7368d77", 00:13:39.809 "is_configured": true, 00:13:39.809 "data_offset": 2048, 00:13:39.809 "data_size": 63488 00:13:39.809 }, 00:13:39.809 { 00:13:39.809 "name": "BaseBdev2", 00:13:39.809 "uuid": "a4249424-a145-4f88-b3f9-ea6749ecb665", 00:13:39.809 "is_configured": true, 00:13:39.809 "data_offset": 2048, 00:13:39.809 "data_size": 63488 00:13:39.809 } 00:13:39.809 ] 00:13:39.809 } 00:13:39.809 } 00:13:39.809 }' 00:13:39.809 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.809 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:39.809 BaseBdev2' 00:13:39.809 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:39.809 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:39.809 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.067 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.067 "name": "BaseBdev1", 00:13:40.067 "aliases": [ 00:13:40.067 "5219989e-0f31-4deb-a09e-894ff7368d77" 00:13:40.067 ], 00:13:40.067 "product_name": "Malloc disk", 00:13:40.067 "block_size": 512, 00:13:40.067 "num_blocks": 65536, 00:13:40.067 "uuid": "5219989e-0f31-4deb-a09e-894ff7368d77", 00:13:40.067 "assigned_rate_limits": { 00:13:40.067 "rw_ios_per_sec": 0, 00:13:40.067 "rw_mbytes_per_sec": 0, 00:13:40.067 "r_mbytes_per_sec": 0, 00:13:40.067 "w_mbytes_per_sec": 0 
00:13:40.067 }, 00:13:40.067 "claimed": true, 00:13:40.067 "claim_type": "exclusive_write", 00:13:40.067 "zoned": false, 00:13:40.067 "supported_io_types": { 00:13:40.067 "read": true, 00:13:40.067 "write": true, 00:13:40.067 "unmap": true, 00:13:40.067 "write_zeroes": true, 00:13:40.067 "flush": true, 00:13:40.067 "reset": true, 00:13:40.067 "compare": false, 00:13:40.067 "compare_and_write": false, 00:13:40.067 "abort": true, 00:13:40.067 "nvme_admin": false, 00:13:40.067 "nvme_io": false 00:13:40.067 }, 00:13:40.067 "memory_domains": [ 00:13:40.067 { 00:13:40.067 "dma_device_id": "system", 00:13:40.067 "dma_device_type": 1 00:13:40.067 }, 00:13:40.068 { 00:13:40.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.068 "dma_device_type": 2 00:13:40.068 } 00:13:40.068 ], 00:13:40.068 "driver_specific": {} 00:13:40.068 }' 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.068 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:40.325 18:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.584 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.584 "name": "BaseBdev2", 00:13:40.584 "aliases": [ 00:13:40.584 "a4249424-a145-4f88-b3f9-ea6749ecb665" 00:13:40.584 ], 00:13:40.584 "product_name": "Malloc disk", 00:13:40.584 "block_size": 512, 00:13:40.584 "num_blocks": 65536, 00:13:40.584 "uuid": "a4249424-a145-4f88-b3f9-ea6749ecb665", 00:13:40.584 "assigned_rate_limits": { 00:13:40.584 "rw_ios_per_sec": 0, 00:13:40.584 "rw_mbytes_per_sec": 0, 00:13:40.584 "r_mbytes_per_sec": 0, 00:13:40.584 "w_mbytes_per_sec": 0 00:13:40.584 }, 00:13:40.584 "claimed": true, 00:13:40.584 "claim_type": "exclusive_write", 00:13:40.584 "zoned": false, 00:13:40.584 "supported_io_types": { 00:13:40.584 "read": true, 00:13:40.584 "write": true, 00:13:40.584 "unmap": true, 00:13:40.584 "write_zeroes": true, 00:13:40.584 "flush": true, 00:13:40.584 "reset": true, 00:13:40.584 "compare": false, 00:13:40.584 "compare_and_write": false, 00:13:40.584 "abort": true, 00:13:40.584 "nvme_admin": false, 00:13:40.584 "nvme_io": false 00:13:40.584 }, 00:13:40.584 "memory_domains": [ 00:13:40.584 { 00:13:40.584 "dma_device_id": "system", 00:13:40.584 "dma_device_type": 1 00:13:40.584 }, 00:13:40.584 { 00:13:40.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.584 "dma_device_type": 2 00:13:40.584 } 00:13:40.584 ], 00:13:40.584 "driver_specific": {} 00:13:40.584 }' 00:13:40.584 18:57:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.584 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.584 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.584 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.584 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.842 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:41.100 [2024-06-10 18:57:55.724470] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.100 [2024-06-10 18:57:55.724494] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.100 [2024-06-10 18:57:55.724532] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.100 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.359 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:13:41.359 "name": "Existed_Raid", 00:13:41.359 "uuid": "bf05c673-45d9-448d-b9c9-57a83e715a6c", 00:13:41.359 "strip_size_kb": 64, 00:13:41.359 "state": "offline", 00:13:41.359 "raid_level": "concat", 00:13:41.359 "superblock": true, 00:13:41.359 "num_base_bdevs": 2, 00:13:41.359 "num_base_bdevs_discovered": 1, 00:13:41.359 "num_base_bdevs_operational": 1, 00:13:41.359 "base_bdevs_list": [ 00:13:41.359 { 00:13:41.359 "name": null, 00:13:41.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.359 "is_configured": false, 00:13:41.359 "data_offset": 2048, 00:13:41.359 "data_size": 63488 00:13:41.359 }, 00:13:41.359 { 00:13:41.359 "name": "BaseBdev2", 00:13:41.359 "uuid": "a4249424-a145-4f88-b3f9-ea6749ecb665", 00:13:41.359 "is_configured": true, 00:13:41.359 "data_offset": 2048, 00:13:41.359 "data_size": 63488 00:13:41.359 } 00:13:41.359 ] 00:13:41.359 }' 00:13:41.359 18:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.359 18:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.925 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:41.925 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:41.925 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.925 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:42.183 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:42.183 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.183 18:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:42.441 [2024-06-10 18:57:56.980824] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.441 [2024-06-10 18:57:56.980868] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbe25f0 name Existed_Raid, state offline 00:13:42.441 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:42.441 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:42.441 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.441 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1627836 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1627836 ']' 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1627836 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1627836 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1627836' 00:13:42.700 killing process with pid 1627836 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1627836 00:13:42.700 [2024-06-10 18:57:57.301900] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.700 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1627836 00:13:42.700 [2024-06-10 18:57:57.302751] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.958 18:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:42.958 00:13:42.958 real 0m9.964s 00:13:42.958 user 0m17.751s 00:13:42.958 sys 0m1.832s 00:13:42.958 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:42.958 18:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.958 ************************************ 00:13:42.958 END TEST raid_state_function_test_sb 00:13:42.958 ************************************ 00:13:42.958 18:57:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:42.958 18:57:57 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:42.958 18:57:57 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:42.958 18:57:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.958 ************************************ 00:13:42.958 START TEST raid_superblock_test 00:13:42.958 ************************************ 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 2 
00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:42.958 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1629679 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1629679 /var/tmp/spdk-raid.sock 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1629679 ']' 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:42.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:42.959 18:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.959 [2024-06-10 18:57:57.637684] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:13:42.959 [2024-06-10 18:57:57.637740] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629679 ] 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.6 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.1 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:42.959 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.4 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:42.959 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:42.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.959 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:43.217 [2024-06-10 18:57:57.769423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.217 [2024-06-10 18:57:57.857426] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.217 [2024-06-10 18:57:57.915591] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.217 [2024-06-10 18:57:57.915620] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:43.783 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:44.099 malloc1 00:13:44.100 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:44.357 [2024-06-10 18:57:58.979266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:44.357 [2024-06-10 18:57:58.979312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.357 [2024-06-10 18:57:58.979332] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b9b70 00:13:44.357 [2024-06-10 18:57:58.979344] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.357 [2024-06-10 18:57:58.980873] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.357 [2024-06-10 18:57:58.980899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:44.357 pt1 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:44.357 18:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:44.615 malloc2 00:13:44.615 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.873 [2024-06-10 18:57:59.441101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.873 [2024-06-10 18:57:59.441147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.873 [2024-06-10 18:57:59.441162] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13baf70 00:13:44.873 [2024-06-10 18:57:59.441174] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.873 [2024-06-10 18:57:59.442595] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.873 [2024-06-10 18:57:59.442623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.873 pt2 00:13:44.873 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:44.873 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:44.873 18:57:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:13:45.130 [2024-06-10 18:57:59.665716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:45.130 [2024-06-10 18:57:59.666925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.130 [2024-06-10 18:57:59.667054] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x155d870 00:13:45.130 [2024-06-10 18:57:59.667066] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:45.130 [2024-06-10 18:57:59.667242] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1553170 00:13:45.130 [2024-06-10 18:57:59.667366] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x155d870 00:13:45.130 [2024-06-10 18:57:59.667376] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x155d870 00:13:45.130 [2024-06-10 18:57:59.667462] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.131 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.389 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.389 "name": "raid_bdev1", 00:13:45.389 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:45.389 "strip_size_kb": 64, 00:13:45.389 "state": "online", 00:13:45.389 "raid_level": "concat", 00:13:45.389 "superblock": true, 00:13:45.389 "num_base_bdevs": 2, 00:13:45.389 "num_base_bdevs_discovered": 2, 00:13:45.389 "num_base_bdevs_operational": 2, 00:13:45.389 "base_bdevs_list": [ 00:13:45.389 { 00:13:45.389 "name": "pt1", 00:13:45.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:45.389 "is_configured": true, 00:13:45.389 "data_offset": 2048, 00:13:45.389 "data_size": 63488 00:13:45.389 }, 00:13:45.389 { 00:13:45.389 "name": "pt2", 00:13:45.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.389 "is_configured": true, 00:13:45.389 "data_offset": 2048, 00:13:45.389 "data_size": 63488 00:13:45.389 } 00:13:45.389 ] 00:13:45.389 }' 00:13:45.389 18:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.389 18:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:45.956 [2024-06-10 18:58:00.676529] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:45.956 "name": "raid_bdev1", 00:13:45.956 "aliases": [ 00:13:45.956 "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc" 00:13:45.956 ], 00:13:45.956 "product_name": "Raid Volume", 00:13:45.956 "block_size": 512, 00:13:45.956 "num_blocks": 126976, 00:13:45.956 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:45.956 "assigned_rate_limits": { 00:13:45.956 "rw_ios_per_sec": 0, 00:13:45.956 "rw_mbytes_per_sec": 0, 00:13:45.956 "r_mbytes_per_sec": 0, 00:13:45.956 "w_mbytes_per_sec": 0 00:13:45.956 }, 00:13:45.956 "claimed": false, 00:13:45.956 "zoned": false, 00:13:45.956 "supported_io_types": { 00:13:45.956 "read": true, 00:13:45.956 "write": true, 00:13:45.956 "unmap": true, 00:13:45.956 "write_zeroes": true, 00:13:45.956 "flush": true, 00:13:45.956 "reset": true, 00:13:45.956 "compare": false, 00:13:45.956 "compare_and_write": false, 00:13:45.956 "abort": false, 00:13:45.956 "nvme_admin": false, 00:13:45.956 "nvme_io": false 00:13:45.956 }, 00:13:45.956 "memory_domains": [ 00:13:45.956 { 00:13:45.956 
"dma_device_id": "system", 00:13:45.956 "dma_device_type": 1 00:13:45.956 }, 00:13:45.956 { 00:13:45.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.956 "dma_device_type": 2 00:13:45.956 }, 00:13:45.956 { 00:13:45.956 "dma_device_id": "system", 00:13:45.956 "dma_device_type": 1 00:13:45.956 }, 00:13:45.956 { 00:13:45.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.956 "dma_device_type": 2 00:13:45.956 } 00:13:45.956 ], 00:13:45.956 "driver_specific": { 00:13:45.956 "raid": { 00:13:45.956 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:45.956 "strip_size_kb": 64, 00:13:45.956 "state": "online", 00:13:45.956 "raid_level": "concat", 00:13:45.956 "superblock": true, 00:13:45.956 "num_base_bdevs": 2, 00:13:45.956 "num_base_bdevs_discovered": 2, 00:13:45.956 "num_base_bdevs_operational": 2, 00:13:45.956 "base_bdevs_list": [ 00:13:45.956 { 00:13:45.956 "name": "pt1", 00:13:45.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:45.956 "is_configured": true, 00:13:45.956 "data_offset": 2048, 00:13:45.956 "data_size": 63488 00:13:45.956 }, 00:13:45.956 { 00:13:45.956 "name": "pt2", 00:13:45.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.956 "is_configured": true, 00:13:45.956 "data_offset": 2048, 00:13:45.956 "data_size": 63488 00:13:45.956 } 00:13:45.956 ] 00:13:45.956 } 00:13:45.956 } 00:13:45.956 }' 00:13:45.956 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.215 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:46.215 pt2' 00:13:46.215 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.215 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:46.215 18:58:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.215 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.216 "name": "pt1", 00:13:46.216 "aliases": [ 00:13:46.216 "00000000-0000-0000-0000-000000000001" 00:13:46.216 ], 00:13:46.216 "product_name": "passthru", 00:13:46.216 "block_size": 512, 00:13:46.216 "num_blocks": 65536, 00:13:46.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.216 "assigned_rate_limits": { 00:13:46.216 "rw_ios_per_sec": 0, 00:13:46.216 "rw_mbytes_per_sec": 0, 00:13:46.216 "r_mbytes_per_sec": 0, 00:13:46.216 "w_mbytes_per_sec": 0 00:13:46.216 }, 00:13:46.216 "claimed": true, 00:13:46.216 "claim_type": "exclusive_write", 00:13:46.216 "zoned": false, 00:13:46.216 "supported_io_types": { 00:13:46.216 "read": true, 00:13:46.216 "write": true, 00:13:46.216 "unmap": true, 00:13:46.216 "write_zeroes": true, 00:13:46.216 "flush": true, 00:13:46.216 "reset": true, 00:13:46.216 "compare": false, 00:13:46.216 "compare_and_write": false, 00:13:46.216 "abort": true, 00:13:46.216 "nvme_admin": false, 00:13:46.216 "nvme_io": false 00:13:46.216 }, 00:13:46.216 "memory_domains": [ 00:13:46.216 { 00:13:46.216 "dma_device_id": "system", 00:13:46.216 "dma_device_type": 1 00:13:46.216 }, 00:13:46.216 { 00:13:46.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.216 "dma_device_type": 2 00:13:46.216 } 00:13:46.216 ], 00:13:46.216 "driver_specific": { 00:13:46.216 "passthru": { 00:13:46.216 "name": "pt1", 00:13:46.216 "base_bdev_name": "malloc1" 00:13:46.216 } 00:13:46.216 } 00:13:46.216 }' 00:13:46.474 18:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:46.474 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.732 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.732 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:46.732 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.732 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:46.732 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.990 "name": "pt2", 00:13:46.990 "aliases": [ 00:13:46.990 "00000000-0000-0000-0000-000000000002" 00:13:46.990 ], 00:13:46.990 "product_name": "passthru", 00:13:46.990 "block_size": 512, 00:13:46.990 "num_blocks": 65536, 00:13:46.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.990 "assigned_rate_limits": { 00:13:46.990 "rw_ios_per_sec": 0, 00:13:46.990 "rw_mbytes_per_sec": 0, 00:13:46.990 "r_mbytes_per_sec": 0, 00:13:46.990 "w_mbytes_per_sec": 0 00:13:46.990 }, 00:13:46.990 "claimed": true, 00:13:46.990 "claim_type": "exclusive_write", 00:13:46.990 "zoned": false, 00:13:46.990 "supported_io_types": { 00:13:46.990 "read": true, 00:13:46.990 "write": true, 00:13:46.990 
"unmap": true, 00:13:46.990 "write_zeroes": true, 00:13:46.990 "flush": true, 00:13:46.990 "reset": true, 00:13:46.990 "compare": false, 00:13:46.990 "compare_and_write": false, 00:13:46.990 "abort": true, 00:13:46.990 "nvme_admin": false, 00:13:46.990 "nvme_io": false 00:13:46.990 }, 00:13:46.990 "memory_domains": [ 00:13:46.990 { 00:13:46.990 "dma_device_id": "system", 00:13:46.990 "dma_device_type": 1 00:13:46.990 }, 00:13:46.990 { 00:13:46.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.990 "dma_device_type": 2 00:13:46.990 } 00:13:46.990 ], 00:13:46.990 "driver_specific": { 00:13:46.990 "passthru": { 00:13:46.990 "name": "pt2", 00:13:46.990 "base_bdev_name": "malloc2" 00:13:46.990 } 00:13:46.990 } 00:13:46.990 }' 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.990 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:47.248 18:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:47.507 [2024-06-10 18:58:02.092290] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.507 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8a6a3fe7-ad37-4caa-b416-eddc9764a6cc 00:13:47.507 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8a6a3fe7-ad37-4caa-b416-eddc9764a6cc ']' 00:13:47.507 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:47.764 [2024-06-10 18:58:02.324712] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.764 [2024-06-10 18:58:02.324728] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.764 [2024-06-10 18:58:02.324777] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.764 [2024-06-10 18:58:02.324815] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.764 [2024-06-10 18:58:02.324826] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x155d870 name raid_bdev1, state offline 00:13:47.764 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.764 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:48.022 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:48.022 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:48.022 18:58:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:48.022 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:48.280 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:48.280 18:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:48.280 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:48.280 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:48.538 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:48.796 [2024-06-10 18:58:03.447637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:48.796 [2024-06-10 18:58:03.448887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:48.796 [2024-06-10 18:58:03.448937] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:48.796 [2024-06-10 18:58:03.448974] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:48.796 [2024-06-10 18:58:03.448992] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.796 [2024-06-10 18:58:03.449000] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13ba010 name raid_bdev1, state configuring 00:13:48.796 request: 00:13:48.796 { 00:13:48.796 "name": "raid_bdev1", 
00:13:48.796 "raid_level": "concat", 00:13:48.796 "base_bdevs": [ 00:13:48.796 "malloc1", 00:13:48.796 "malloc2" 00:13:48.796 ], 00:13:48.796 "superblock": false, 00:13:48.796 "strip_size_kb": 64, 00:13:48.796 "method": "bdev_raid_create", 00:13:48.796 "req_id": 1 00:13:48.796 } 00:13:48.796 Got JSON-RPC error response 00:13:48.796 response: 00:13:48.796 { 00:13:48.796 "code": -17, 00:13:48.796 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:48.796 } 00:13:48.796 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:13:48.796 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:48.796 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:48.796 18:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:48.796 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.796 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:49.055 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:49.055 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:49.055 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:49.312 [2024-06-10 18:58:03.904775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:49.312 [2024-06-10 18:58:03.904815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.312 [2024-06-10 18:58:03.904831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x155d5f0 
00:13:49.312 [2024-06-10 18:58:03.904843] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.312 [2024-06-10 18:58:03.906325] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.312 [2024-06-10 18:58:03.906352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:49.312 [2024-06-10 18:58:03.906417] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:49.312 [2024-06-10 18:58:03.906439] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:49.312 pt1 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.312 18:58:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.570 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.570 "name": "raid_bdev1", 00:13:49.570 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:49.570 "strip_size_kb": 64, 00:13:49.570 "state": "configuring", 00:13:49.570 "raid_level": "concat", 00:13:49.570 "superblock": true, 00:13:49.570 "num_base_bdevs": 2, 00:13:49.570 "num_base_bdevs_discovered": 1, 00:13:49.570 "num_base_bdevs_operational": 2, 00:13:49.570 "base_bdevs_list": [ 00:13:49.570 { 00:13:49.570 "name": "pt1", 00:13:49.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:49.570 "is_configured": true, 00:13:49.570 "data_offset": 2048, 00:13:49.570 "data_size": 63488 00:13:49.570 }, 00:13:49.570 { 00:13:49.570 "name": null, 00:13:49.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.570 "is_configured": false, 00:13:49.570 "data_offset": 2048, 00:13:49.570 "data_size": 63488 00:13:49.570 } 00:13:49.570 ] 00:13:49.570 }' 00:13:49.570 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.570 18:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.135 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:13:50.135 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:50.135 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:50.135 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:50.393 [2024-06-10 18:58:04.939508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:50.393 [2024-06-10 18:58:04.939550] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:50.393 [2024-06-10 18:58:04.939567] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1553dc0 00:13:50.393 [2024-06-10 18:58:04.939586] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.393 [2024-06-10 18:58:04.939894] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.393 [2024-06-10 18:58:04.939910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:50.393 [2024-06-10 18:58:04.939964] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:50.393 [2024-06-10 18:58:04.939981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:50.393 [2024-06-10 18:58:04.940064] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1552af0 00:13:50.393 [2024-06-10 18:58:04.940073] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:50.393 [2024-06-10 18:58:04.940221] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15546f0 00:13:50.393 [2024-06-10 18:58:04.940327] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1552af0 00:13:50.393 [2024-06-10 18:58:04.940335] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1552af0 00:13:50.393 [2024-06-10 18:58:04.940421] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.393 pt2 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:50.393 
18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.393 18:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.651 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.651 "name": "raid_bdev1", 00:13:50.651 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:50.651 "strip_size_kb": 64, 00:13:50.651 "state": "online", 00:13:50.651 "raid_level": "concat", 00:13:50.651 "superblock": true, 00:13:50.651 "num_base_bdevs": 2, 00:13:50.651 "num_base_bdevs_discovered": 2, 00:13:50.651 "num_base_bdevs_operational": 2, 00:13:50.651 "base_bdevs_list": [ 00:13:50.652 { 00:13:50.652 "name": "pt1", 00:13:50.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.652 "is_configured": true, 00:13:50.652 "data_offset": 2048, 00:13:50.652 "data_size": 63488 00:13:50.652 }, 00:13:50.652 { 00:13:50.652 "name": "pt2", 00:13:50.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.652 
"is_configured": true, 00:13:50.652 "data_offset": 2048, 00:13:50.652 "data_size": 63488 00:13:50.652 } 00:13:50.652 ] 00:13:50.652 }' 00:13:50.652 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.652 18:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:51.216 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:51.216 [2024-06-10 18:58:05.966571] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.474 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:51.474 "name": "raid_bdev1", 00:13:51.474 "aliases": [ 00:13:51.474 "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc" 00:13:51.474 ], 00:13:51.474 "product_name": "Raid Volume", 00:13:51.474 "block_size": 512, 00:13:51.474 "num_blocks": 126976, 00:13:51.474 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:51.474 "assigned_rate_limits": { 00:13:51.474 "rw_ios_per_sec": 0, 00:13:51.474 "rw_mbytes_per_sec": 0, 00:13:51.474 "r_mbytes_per_sec": 0, 00:13:51.474 "w_mbytes_per_sec": 0 00:13:51.474 }, 
00:13:51.474 "claimed": false, 00:13:51.474 "zoned": false, 00:13:51.474 "supported_io_types": { 00:13:51.474 "read": true, 00:13:51.474 "write": true, 00:13:51.474 "unmap": true, 00:13:51.474 "write_zeroes": true, 00:13:51.474 "flush": true, 00:13:51.474 "reset": true, 00:13:51.474 "compare": false, 00:13:51.474 "compare_and_write": false, 00:13:51.474 "abort": false, 00:13:51.474 "nvme_admin": false, 00:13:51.474 "nvme_io": false 00:13:51.474 }, 00:13:51.474 "memory_domains": [ 00:13:51.474 { 00:13:51.474 "dma_device_id": "system", 00:13:51.474 "dma_device_type": 1 00:13:51.474 }, 00:13:51.474 { 00:13:51.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.474 "dma_device_type": 2 00:13:51.474 }, 00:13:51.474 { 00:13:51.474 "dma_device_id": "system", 00:13:51.474 "dma_device_type": 1 00:13:51.474 }, 00:13:51.474 { 00:13:51.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.474 "dma_device_type": 2 00:13:51.474 } 00:13:51.474 ], 00:13:51.474 "driver_specific": { 00:13:51.474 "raid": { 00:13:51.474 "uuid": "8a6a3fe7-ad37-4caa-b416-eddc9764a6cc", 00:13:51.474 "strip_size_kb": 64, 00:13:51.474 "state": "online", 00:13:51.475 "raid_level": "concat", 00:13:51.475 "superblock": true, 00:13:51.475 "num_base_bdevs": 2, 00:13:51.475 "num_base_bdevs_discovered": 2, 00:13:51.475 "num_base_bdevs_operational": 2, 00:13:51.475 "base_bdevs_list": [ 00:13:51.475 { 00:13:51.475 "name": "pt1", 00:13:51.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:51.475 "is_configured": true, 00:13:51.475 "data_offset": 2048, 00:13:51.475 "data_size": 63488 00:13:51.475 }, 00:13:51.475 { 00:13:51.475 "name": "pt2", 00:13:51.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:51.475 "is_configured": true, 00:13:51.475 "data_offset": 2048, 00:13:51.475 "data_size": 63488 00:13:51.475 } 00:13:51.475 ] 00:13:51.475 } 00:13:51.475 } 00:13:51.475 }' 00:13:51.475 18:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:13:51.475 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:51.475 pt2' 00:13:51.475 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.475 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:51.475 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.732 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.732 "name": "pt1", 00:13:51.732 "aliases": [ 00:13:51.732 "00000000-0000-0000-0000-000000000001" 00:13:51.732 ], 00:13:51.732 "product_name": "passthru", 00:13:51.732 "block_size": 512, 00:13:51.732 "num_blocks": 65536, 00:13:51.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:51.732 "assigned_rate_limits": { 00:13:51.733 "rw_ios_per_sec": 0, 00:13:51.733 "rw_mbytes_per_sec": 0, 00:13:51.733 "r_mbytes_per_sec": 0, 00:13:51.733 "w_mbytes_per_sec": 0 00:13:51.733 }, 00:13:51.733 "claimed": true, 00:13:51.733 "claim_type": "exclusive_write", 00:13:51.733 "zoned": false, 00:13:51.733 "supported_io_types": { 00:13:51.733 "read": true, 00:13:51.733 "write": true, 00:13:51.733 "unmap": true, 00:13:51.733 "write_zeroes": true, 00:13:51.733 "flush": true, 00:13:51.733 "reset": true, 00:13:51.733 "compare": false, 00:13:51.733 "compare_and_write": false, 00:13:51.733 "abort": true, 00:13:51.733 "nvme_admin": false, 00:13:51.733 "nvme_io": false 00:13:51.733 }, 00:13:51.733 "memory_domains": [ 00:13:51.733 { 00:13:51.733 "dma_device_id": "system", 00:13:51.733 "dma_device_type": 1 00:13:51.733 }, 00:13:51.733 { 00:13:51.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.733 "dma_device_type": 2 00:13:51.733 } 00:13:51.733 ], 00:13:51.733 "driver_specific": { 00:13:51.733 "passthru": { 00:13:51.733 "name": 
"pt1", 00:13:51.733 "base_bdev_name": "malloc1" 00:13:51.733 } 00:13:51.733 } 00:13:51.733 }' 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.733 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:51.991 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.248 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.248 "name": "pt2", 00:13:52.248 "aliases": [ 00:13:52.248 "00000000-0000-0000-0000-000000000002" 00:13:52.248 ], 00:13:52.248 "product_name": "passthru", 00:13:52.248 "block_size": 512, 00:13:52.248 
"num_blocks": 65536, 00:13:52.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.248 "assigned_rate_limits": { 00:13:52.248 "rw_ios_per_sec": 0, 00:13:52.248 "rw_mbytes_per_sec": 0, 00:13:52.248 "r_mbytes_per_sec": 0, 00:13:52.248 "w_mbytes_per_sec": 0 00:13:52.248 }, 00:13:52.248 "claimed": true, 00:13:52.248 "claim_type": "exclusive_write", 00:13:52.248 "zoned": false, 00:13:52.249 "supported_io_types": { 00:13:52.249 "read": true, 00:13:52.249 "write": true, 00:13:52.249 "unmap": true, 00:13:52.249 "write_zeroes": true, 00:13:52.249 "flush": true, 00:13:52.249 "reset": true, 00:13:52.249 "compare": false, 00:13:52.249 "compare_and_write": false, 00:13:52.249 "abort": true, 00:13:52.249 "nvme_admin": false, 00:13:52.249 "nvme_io": false 00:13:52.249 }, 00:13:52.249 "memory_domains": [ 00:13:52.249 { 00:13:52.249 "dma_device_id": "system", 00:13:52.249 "dma_device_type": 1 00:13:52.249 }, 00:13:52.249 { 00:13:52.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.249 "dma_device_type": 2 00:13:52.249 } 00:13:52.249 ], 00:13:52.249 "driver_specific": { 00:13:52.249 "passthru": { 00:13:52.249 "name": "pt2", 00:13:52.249 "base_bdev_name": "malloc2" 00:13:52.249 } 00:13:52.249 } 00:13:52.249 }' 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.249 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:52.506 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:52.764 [2024-06-10 18:58:07.354222] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8a6a3fe7-ad37-4caa-b416-eddc9764a6cc '!=' 8a6a3fe7-ad37-4caa-b416-eddc9764a6cc ']' 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1629679 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1629679 ']' 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1629679 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1629679 00:13:52.764 18:58:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1629679' 00:13:52.764 killing process with pid 1629679 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1629679 00:13:52.764 [2024-06-10 18:58:07.431153] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.764 [2024-06-10 18:58:07.431199] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.764 [2024-06-10 18:58:07.431239] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.764 [2024-06-10 18:58:07.431250] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1552af0 name raid_bdev1, state offline 00:13:52.764 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1629679 00:13:52.764 [2024-06-10 18:58:07.447062] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.022 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:53.022 00:13:53.022 real 0m10.059s 00:13:53.022 user 0m17.938s 00:13:53.022 sys 0m1.843s 00:13:53.022 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:53.022 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.022 ************************************ 00:13:53.022 END TEST raid_superblock_test 00:13:53.022 ************************************ 00:13:53.022 18:58:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:13:53.022 18:58:07 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:53.022 18:58:07 bdev_raid -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:13:53.022 18:58:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.022 ************************************ 00:13:53.022 START TEST raid_read_error_test 00:13:53.022 ************************************ 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 2 read 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:53.022 18:58:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.pKA3g5fpMc 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1631755 00:13:53.022 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1631755 /var/tmp/spdk-raid.sock 00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1631755 ']' 00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:53.023 18:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.281 [2024-06-10 18:58:07.797012] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:13:53.281 [2024-06-10 18:58:07.797071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631755 ] 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.6 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.1 cannot 
be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.4 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:53.281 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:53.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:53.281 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:53.281 [2024-06-10 18:58:07.931812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.281 [2024-06-10 18:58:08.019020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.539 [2024-06-10 18:58:08.073930] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.539 [2024-06-10 18:58:08.073957] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.104 18:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:54.104 18:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:13:54.104 18:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.104 18:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.362 BaseBdev1_malloc 00:13:54.362 18:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:54.619 true 00:13:54.619 18:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:54.619 [2024-06-10 18:58:09.354244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:54.620 [2024-06-10 18:58:09.354283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.620 [2024-06-10 18:58:09.354301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed9d50 00:13:54.620 [2024-06-10 18:58:09.354313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.620 [2024-06-10 18:58:09.355935] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.620 [2024-06-10 18:58:09.355963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.620 BaseBdev1 00:13:54.620 18:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.620 18:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.878 BaseBdev2_malloc 00:13:54.878 18:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:55.136 true 00:13:55.136 18:58:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:55.395 [2024-06-10 18:58:10.032436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:55.395 [2024-06-10 18:58:10.032478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.395 [2024-06-10 18:58:10.032497] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1edf2e0 00:13:55.395 [2024-06-10 18:58:10.032510] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.395 [2024-06-10 18:58:10.033919] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.395 [2024-06-10 18:58:10.033950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.395 BaseBdev2 00:13:55.395 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:55.654 [2024-06-10 18:58:10.257077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.654 [2024-06-10 18:58:10.258296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.654 [2024-06-10 18:58:10.258479] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ee0640 00:13:55.654 [2024-06-10 18:58:10.258496] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:55.654 [2024-06-10 18:58:10.258685] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d34cc0 00:13:55.654 [2024-06-10 18:58:10.258821] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ee0640 00:13:55.654 [2024-06-10 18:58:10.258830] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ee0640 00:13:55.654 [2024-06-10 18:58:10.258927] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.654 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.913 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.913 "name": "raid_bdev1", 00:13:55.913 "uuid": "7493b7fa-dcfc-475f-b955-475c01fb966d", 00:13:55.913 "strip_size_kb": 64, 00:13:55.913 "state": "online", 00:13:55.913 "raid_level": "concat", 00:13:55.913 "superblock": true, 
00:13:55.913 "num_base_bdevs": 2, 00:13:55.913 "num_base_bdevs_discovered": 2, 00:13:55.913 "num_base_bdevs_operational": 2, 00:13:55.913 "base_bdevs_list": [ 00:13:55.913 { 00:13:55.913 "name": "BaseBdev1", 00:13:55.913 "uuid": "690d2853-0e0e-5da7-b6d6-f6a956600699", 00:13:55.913 "is_configured": true, 00:13:55.913 "data_offset": 2048, 00:13:55.913 "data_size": 63488 00:13:55.913 }, 00:13:55.913 { 00:13:55.913 "name": "BaseBdev2", 00:13:55.913 "uuid": "97b00163-e389-539a-8b1e-cab70d17c0da", 00:13:55.913 "is_configured": true, 00:13:55.913 "data_offset": 2048, 00:13:55.913 "data_size": 63488 00:13:55.913 } 00:13:55.913 ] 00:13:55.913 }' 00:13:55.913 18:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.913 18:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.478 18:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:56.478 18:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:56.478 [2024-06-10 18:58:11.147627] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d353a0 00:13:57.411 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:57.670 18:58:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.670 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.928 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.928 "name": "raid_bdev1", 00:13:57.928 "uuid": "7493b7fa-dcfc-475f-b955-475c01fb966d", 00:13:57.928 "strip_size_kb": 64, 00:13:57.928 "state": "online", 00:13:57.928 "raid_level": "concat", 00:13:57.928 "superblock": true, 00:13:57.928 "num_base_bdevs": 2, 00:13:57.928 "num_base_bdevs_discovered": 2, 00:13:57.928 "num_base_bdevs_operational": 2, 00:13:57.928 "base_bdevs_list": [ 00:13:57.928 { 00:13:57.928 "name": "BaseBdev1", 00:13:57.928 "uuid": "690d2853-0e0e-5da7-b6d6-f6a956600699", 00:13:57.928 "is_configured": true, 00:13:57.928 "data_offset": 2048, 00:13:57.928 "data_size": 63488 00:13:57.928 }, 
00:13:57.928 { 00:13:57.928 "name": "BaseBdev2", 00:13:57.928 "uuid": "97b00163-e389-539a-8b1e-cab70d17c0da", 00:13:57.928 "is_configured": true, 00:13:57.928 "data_offset": 2048, 00:13:57.928 "data_size": 63488 00:13:57.928 } 00:13:57.928 ] 00:13:57.928 }' 00:13:57.928 18:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.928 18:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.494 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:58.494 [2024-06-10 18:58:13.232548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.494 [2024-06-10 18:58:13.232597] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.494 [2024-06-10 18:58:13.235506] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.494 [2024-06-10 18:58:13.235535] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.494 [2024-06-10 18:58:13.235558] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.494 [2024-06-10 18:58:13.235568] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ee0640 name raid_bdev1, state offline 00:13:58.494 0 00:13:58.494 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1631755 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1631755 ']' 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1631755 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:58.752 
18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1631755 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1631755' 00:13:58.752 killing process with pid 1631755 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1631755 00:13:58.752 [2024-06-10 18:58:13.309433] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.752 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1631755 00:13:58.752 [2024-06-10 18:58:13.318983] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.pKA3g5fpMc 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:13:59.011 00:13:59.011 real 0m5.804s 00:13:59.011 user 0m8.997s 00:13:59.011 sys 0m1.016s 00:13:59.011 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:59.011 18:58:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.011 ************************************ 00:13:59.011 END TEST raid_read_error_test 00:13:59.011 ************************************ 00:13:59.011 18:58:13 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:13:59.011 18:58:13 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:59.011 18:58:13 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:59.011 18:58:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.011 ************************************ 00:13:59.011 START TEST raid_write_error_test 00:13:59.011 ************************************ 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 2 write 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QjUuat3Gkv 00:13:59.011 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1632879 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1632879 /var/tmp/spdk-raid.sock 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1632879 ']' 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:59.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:59.012 18:58:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.012 [2024-06-10 18:58:13.686471] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:13:59.012 [2024-06-10 18:58:13.686532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632879 ] 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.0 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.1 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.2 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.3 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.4 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.5 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested 
device 0000:b6:01.6 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:01.7 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.0 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.1 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.2 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.3 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.4 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.5 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.6 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b6:02.7 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.0 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.1 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.2 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.3 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.4 
cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.5 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.6 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:01.7 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.0 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.1 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.2 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.3 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.4 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.5 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.6 cannot be used 00:13:59.012 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.012 EAL: Requested device 0000:b8:02.7 cannot be used 00:13:59.271 [2024-06-10 18:58:13.819377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.271 [2024-06-10 18:58:13.905336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.271 [2024-06-10 18:58:13.966204] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.271 [2024-06-10 18:58:13.966245] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:13:59.835 18:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:59.835 18:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:13:59.835 18:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:59.835 18:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.092 BaseBdev1_malloc 00:14:00.092 18:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:00.352 true 00:14:00.352 18:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:00.610 [2024-06-10 18:58:15.243341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:00.610 [2024-06-10 18:58:15.243381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.610 [2024-06-10 18:58:15.243399] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb2d50 00:14:00.610 [2024-06-10 18:58:15.243411] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.610 [2024-06-10 18:58:15.244995] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.610 [2024-06-10 18:58:15.245022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.610 BaseBdev1 00:14:00.610 18:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:00.610 18:58:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.869 BaseBdev2_malloc 00:14:00.869 18:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:01.127 true 00:14:01.127 18:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:01.385 [2024-06-10 18:58:15.889243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:01.385 [2024-06-10 18:58:15.889280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.385 [2024-06-10 18:58:15.889297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb82e0 00:14:01.385 [2024-06-10 18:58:15.889309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.385 [2024-06-10 18:58:15.890624] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.385 [2024-06-10 18:58:15.890650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:01.385 BaseBdev2 00:14:01.385 18:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:01.385 [2024-06-10 18:58:16.113864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.385 [2024-06-10 18:58:16.115032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.385 [2024-06-10 18:58:16.115208] bdev_raid.c:1694:raid_bdev_configure_cont: 
*DEBUG*: io device register 0xfb9640 00:14:01.385 [2024-06-10 18:58:16.115221] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:01.385 [2024-06-10 18:58:16.115394] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe0dcc0 00:14:01.385 [2024-06-10 18:58:16.115525] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfb9640 00:14:01.385 [2024-06-10 18:58:16.115534] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfb9640 00:14:01.385 [2024-06-10 18:58:16.115633] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.385 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.385 
18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.643 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.643 "name": "raid_bdev1", 00:14:01.643 "uuid": "78b4563b-75d2-4cea-85a8-646bcaa7a8c6", 00:14:01.643 "strip_size_kb": 64, 00:14:01.643 "state": "online", 00:14:01.643 "raid_level": "concat", 00:14:01.643 "superblock": true, 00:14:01.643 "num_base_bdevs": 2, 00:14:01.643 "num_base_bdevs_discovered": 2, 00:14:01.643 "num_base_bdevs_operational": 2, 00:14:01.643 "base_bdevs_list": [ 00:14:01.643 { 00:14:01.643 "name": "BaseBdev1", 00:14:01.643 "uuid": "47d6491c-f4d2-5a4c-b004-78643c7c94c5", 00:14:01.643 "is_configured": true, 00:14:01.643 "data_offset": 2048, 00:14:01.643 "data_size": 63488 00:14:01.643 }, 00:14:01.643 { 00:14:01.643 "name": "BaseBdev2", 00:14:01.643 "uuid": "a946ad88-392d-5086-983a-93cb0816d8c4", 00:14:01.643 "is_configured": true, 00:14:01.643 "data_offset": 2048, 00:14:01.643 "data_size": 63488 00:14:01.643 } 00:14:01.643 ] 00:14:01.643 }' 00:14:01.643 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.643 18:58:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.209 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:02.209 18:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:02.467 [2024-06-10 18:58:17.008412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe0e3a0 00:14:03.402 18:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.402 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.661 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.661 "name": "raid_bdev1", 00:14:03.661 "uuid": "78b4563b-75d2-4cea-85a8-646bcaa7a8c6", 00:14:03.661 "strip_size_kb": 64, 00:14:03.661 "state": "online", 00:14:03.661 
"raid_level": "concat", 00:14:03.661 "superblock": true, 00:14:03.661 "num_base_bdevs": 2, 00:14:03.661 "num_base_bdevs_discovered": 2, 00:14:03.661 "num_base_bdevs_operational": 2, 00:14:03.661 "base_bdevs_list": [ 00:14:03.661 { 00:14:03.661 "name": "BaseBdev1", 00:14:03.661 "uuid": "47d6491c-f4d2-5a4c-b004-78643c7c94c5", 00:14:03.661 "is_configured": true, 00:14:03.661 "data_offset": 2048, 00:14:03.661 "data_size": 63488 00:14:03.661 }, 00:14:03.661 { 00:14:03.661 "name": "BaseBdev2", 00:14:03.661 "uuid": "a946ad88-392d-5086-983a-93cb0816d8c4", 00:14:03.661 "is_configured": true, 00:14:03.661 "data_offset": 2048, 00:14:03.661 "data_size": 63488 00:14:03.661 } 00:14:03.661 ] 00:14:03.661 }' 00:14:03.661 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.661 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.296 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:04.560 [2024-06-10 18:58:19.077859] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.560 [2024-06-10 18:58:19.077894] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.560 [2024-06-10 18:58:19.080788] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.560 [2024-06-10 18:58:19.080817] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.560 [2024-06-10 18:58:19.080841] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.560 [2024-06-10 18:58:19.080852] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb9640 name raid_bdev1, state offline 00:14:04.560 0 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1632879 
00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1632879 ']' 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1632879 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1632879 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1632879' 00:14:04.560 killing process with pid 1632879 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1632879 00:14:04.560 [2024-06-10 18:58:19.155543] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.560 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1632879 00:14:04.560 [2024-06-10 18:58:19.165278] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QjUuat3Gkv 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # 
case $1 in 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:14:04.820 00:14:04.820 real 0m5.755s 00:14:04.820 user 0m8.931s 00:14:04.820 sys 0m0.982s 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:04.820 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.820 ************************************ 00:14:04.820 END TEST raid_write_error_test 00:14:04.820 ************************************ 00:14:04.820 18:58:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:04.820 18:58:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:04.820 18:58:19 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:14:04.820 18:58:19 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:04.820 18:58:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.820 ************************************ 00:14:04.820 START TEST raid_state_function_test 00:14:04.820 ************************************ 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 false 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:04.820 18:58:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1633842 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1633842' 00:14:04.820 Process raid pid: 1633842 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1633842 /var/tmp/spdk-raid.sock 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1633842 ']' 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:04.820 18:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.820 [2024-06-10 18:58:19.523746] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:14:04.820 [2024-06-10 18:58:19.523804] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.080 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.080 EAL: Requested device 0000:b6:01.0 cannot be used 00:14:05.080 [2024-06-10 18:58:19.657988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.080 [2024-06-10 18:58:19.744620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.080 [2024-06-10 18:58:19.803050] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.080 [2024-06-10 18:58:19.803083] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.648 18:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:05.648 18:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:14:05.648 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:05.907 [2024-06-10 18:58:20.581593] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.907 [2024-06-10 18:58:20.581633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:05.907 [2024-06-10 18:58:20.581643]
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.907 [2024-06-10 18:58:20.581654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.907 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.908 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.166 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.166 "name": "Existed_Raid", 00:14:06.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.166 "strip_size_kb": 0, 00:14:06.166 "state": "configuring", 00:14:06.166 
"raid_level": "raid1", 00:14:06.166 "superblock": false, 00:14:06.166 "num_base_bdevs": 2, 00:14:06.166 "num_base_bdevs_discovered": 0, 00:14:06.166 "num_base_bdevs_operational": 2, 00:14:06.166 "base_bdevs_list": [ 00:14:06.166 { 00:14:06.166 "name": "BaseBdev1", 00:14:06.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.166 "is_configured": false, 00:14:06.166 "data_offset": 0, 00:14:06.166 "data_size": 0 00:14:06.166 }, 00:14:06.167 { 00:14:06.167 "name": "BaseBdev2", 00:14:06.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.167 "is_configured": false, 00:14:06.167 "data_offset": 0, 00:14:06.167 "data_size": 0 00:14:06.167 } 00:14:06.167 ] 00:14:06.167 }' 00:14:06.167 18:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.167 18:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.735 18:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:06.994 [2024-06-10 18:58:21.588119] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.994 [2024-06-10 18:58:21.588147] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcc4f10 name Existed_Raid, state configuring 00:14:06.994 18:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:07.253 [2024-06-10 18:58:21.812716] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.253 [2024-06-10 18:58:21.812741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.253 [2024-06-10 18:58:21.812749] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:07.253 [2024-06-10 18:58:21.812760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.253 18:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:07.512 [2024-06-10 18:58:22.042877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.512 BaseBdev1 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:07.512 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:07.770 [ 00:14:07.770 { 00:14:07.770 "name": "BaseBdev1", 00:14:07.770 "aliases": [ 00:14:07.770 "69b1f32c-9b45-42df-8312-e0e47a310987" 00:14:07.770 ], 00:14:07.770 "product_name": "Malloc disk", 00:14:07.770 "block_size": 512, 00:14:07.770 "num_blocks": 65536, 00:14:07.770 "uuid": "69b1f32c-9b45-42df-8312-e0e47a310987", 00:14:07.770 "assigned_rate_limits": { 00:14:07.770 
"rw_ios_per_sec": 0, 00:14:07.770 "rw_mbytes_per_sec": 0, 00:14:07.770 "r_mbytes_per_sec": 0, 00:14:07.770 "w_mbytes_per_sec": 0 00:14:07.770 }, 00:14:07.770 "claimed": true, 00:14:07.770 "claim_type": "exclusive_write", 00:14:07.770 "zoned": false, 00:14:07.770 "supported_io_types": { 00:14:07.770 "read": true, 00:14:07.770 "write": true, 00:14:07.770 "unmap": true, 00:14:07.770 "write_zeroes": true, 00:14:07.770 "flush": true, 00:14:07.770 "reset": true, 00:14:07.770 "compare": false, 00:14:07.770 "compare_and_write": false, 00:14:07.770 "abort": true, 00:14:07.770 "nvme_admin": false, 00:14:07.770 "nvme_io": false 00:14:07.770 }, 00:14:07.770 "memory_domains": [ 00:14:07.770 { 00:14:07.770 "dma_device_id": "system", 00:14:07.770 "dma_device_type": 1 00:14:07.770 }, 00:14:07.770 { 00:14:07.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.770 "dma_device_type": 2 00:14:07.770 } 00:14:07.770 ], 00:14:07.770 "driver_specific": {} 00:14:07.770 } 00:14:07.770 ] 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- 
# local num_base_bdevs 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.770 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.029 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.029 "name": "Existed_Raid", 00:14:08.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.029 "strip_size_kb": 0, 00:14:08.029 "state": "configuring", 00:14:08.029 "raid_level": "raid1", 00:14:08.029 "superblock": false, 00:14:08.029 "num_base_bdevs": 2, 00:14:08.029 "num_base_bdevs_discovered": 1, 00:14:08.029 "num_base_bdevs_operational": 2, 00:14:08.029 "base_bdevs_list": [ 00:14:08.029 { 00:14:08.029 "name": "BaseBdev1", 00:14:08.029 "uuid": "69b1f32c-9b45-42df-8312-e0e47a310987", 00:14:08.029 "is_configured": true, 00:14:08.029 "data_offset": 0, 00:14:08.029 "data_size": 65536 00:14:08.029 }, 00:14:08.029 { 00:14:08.029 "name": "BaseBdev2", 00:14:08.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.029 "is_configured": false, 00:14:08.029 "data_offset": 0, 00:14:08.029 "data_size": 0 00:14:08.029 } 00:14:08.029 ] 00:14:08.029 }' 00:14:08.029 18:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.029 18:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.597 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:08.855 [2024-06-10 
18:58:23.506715] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.855 [2024-06-10 18:58:23.506748] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcc4800 name Existed_Raid, state configuring 00:14:08.856 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:09.114 [2024-06-10 18:58:23.735330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.114 [2024-06-10 18:58:23.736679] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.114 [2024-06-10 18:58:23.736707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.114 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.373 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.373 "name": "Existed_Raid", 00:14:09.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.373 "strip_size_kb": 0, 00:14:09.373 "state": "configuring", 00:14:09.373 "raid_level": "raid1", 00:14:09.373 "superblock": false, 00:14:09.374 "num_base_bdevs": 2, 00:14:09.374 "num_base_bdevs_discovered": 1, 00:14:09.374 "num_base_bdevs_operational": 2, 00:14:09.374 "base_bdevs_list": [ 00:14:09.374 { 00:14:09.374 "name": "BaseBdev1", 00:14:09.374 "uuid": "69b1f32c-9b45-42df-8312-e0e47a310987", 00:14:09.374 "is_configured": true, 00:14:09.374 "data_offset": 0, 00:14:09.374 "data_size": 65536 00:14:09.374 }, 00:14:09.374 { 00:14:09.374 "name": "BaseBdev2", 00:14:09.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.374 "is_configured": false, 00:14:09.374 "data_offset": 0, 00:14:09.374 "data_size": 0 00:14:09.374 } 00:14:09.374 ] 00:14:09.374 }' 00:14:09.374 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.374 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.942 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.201 [2024-06-10 
18:58:24.709144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.201 [2024-06-10 18:58:24.709177] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcc55f0 00:14:10.201 [2024-06-10 18:58:24.709185] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:10.201 [2024-06-10 18:58:24.709413] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe773b0 00:14:10.201 [2024-06-10 18:58:24.709520] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcc55f0 00:14:10.201 [2024-06-10 18:58:24.709529] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcc55f0 00:14:10.201 [2024-06-10 18:58:24.709682] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.201 BaseBdev2 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.201 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.460 [ 
00:14:10.460 { 00:14:10.460 "name": "BaseBdev2", 00:14:10.460 "aliases": [ 00:14:10.460 "de3a38cd-029d-401f-99b3-7f3087f7772e" 00:14:10.460 ], 00:14:10.460 "product_name": "Malloc disk", 00:14:10.460 "block_size": 512, 00:14:10.460 "num_blocks": 65536, 00:14:10.460 "uuid": "de3a38cd-029d-401f-99b3-7f3087f7772e", 00:14:10.460 "assigned_rate_limits": { 00:14:10.460 "rw_ios_per_sec": 0, 00:14:10.460 "rw_mbytes_per_sec": 0, 00:14:10.460 "r_mbytes_per_sec": 0, 00:14:10.460 "w_mbytes_per_sec": 0 00:14:10.460 }, 00:14:10.460 "claimed": true, 00:14:10.460 "claim_type": "exclusive_write", 00:14:10.460 "zoned": false, 00:14:10.460 "supported_io_types": { 00:14:10.460 "read": true, 00:14:10.460 "write": true, 00:14:10.460 "unmap": true, 00:14:10.460 "write_zeroes": true, 00:14:10.460 "flush": true, 00:14:10.460 "reset": true, 00:14:10.460 "compare": false, 00:14:10.460 "compare_and_write": false, 00:14:10.460 "abort": true, 00:14:10.460 "nvme_admin": false, 00:14:10.460 "nvme_io": false 00:14:10.460 }, 00:14:10.460 "memory_domains": [ 00:14:10.460 { 00:14:10.460 "dma_device_id": "system", 00:14:10.460 "dma_device_type": 1 00:14:10.460 }, 00:14:10.460 { 00:14:10.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.460 "dma_device_type": 2 00:14:10.460 } 00:14:10.460 ], 00:14:10.460 "driver_specific": {} 00:14:10.460 } 00:14:10.460 ] 00:14:10.460 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:10.460 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:10.460 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:10.460 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:10.460 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.461 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.719 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.719 "name": "Existed_Raid", 00:14:10.719 "uuid": "4ef04676-e22c-46c0-a67c-1e72cb1a6565", 00:14:10.719 "strip_size_kb": 0, 00:14:10.719 "state": "online", 00:14:10.719 "raid_level": "raid1", 00:14:10.719 "superblock": false, 00:14:10.719 "num_base_bdevs": 2, 00:14:10.719 "num_base_bdevs_discovered": 2, 00:14:10.719 "num_base_bdevs_operational": 2, 00:14:10.719 "base_bdevs_list": [ 00:14:10.719 { 00:14:10.719 "name": "BaseBdev1", 00:14:10.719 "uuid": "69b1f32c-9b45-42df-8312-e0e47a310987", 00:14:10.719 "is_configured": true, 00:14:10.719 "data_offset": 0, 00:14:10.719 "data_size": 65536 00:14:10.719 }, 00:14:10.719 { 00:14:10.719 "name": "BaseBdev2", 00:14:10.719 "uuid": "de3a38cd-029d-401f-99b3-7f3087f7772e", 
00:14:10.719 "is_configured": true, 00:14:10.719 "data_offset": 0, 00:14:10.719 "data_size": 65536 00:14:10.719 } 00:14:10.719 ] 00:14:10.719 }' 00:14:10.719 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.719 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:11.288 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:11.547 [2024-06-10 18:58:26.153157] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.547 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:11.547 "name": "Existed_Raid", 00:14:11.547 "aliases": [ 00:14:11.547 "4ef04676-e22c-46c0-a67c-1e72cb1a6565" 00:14:11.547 ], 00:14:11.547 "product_name": "Raid Volume", 00:14:11.547 "block_size": 512, 00:14:11.547 "num_blocks": 65536, 00:14:11.547 "uuid": "4ef04676-e22c-46c0-a67c-1e72cb1a6565", 00:14:11.547 "assigned_rate_limits": { 00:14:11.547 "rw_ios_per_sec": 0, 00:14:11.547 "rw_mbytes_per_sec": 0, 00:14:11.547 "r_mbytes_per_sec": 0, 
00:14:11.547 "w_mbytes_per_sec": 0 00:14:11.547 }, 00:14:11.547 "claimed": false, 00:14:11.547 "zoned": false, 00:14:11.547 "supported_io_types": { 00:14:11.547 "read": true, 00:14:11.547 "write": true, 00:14:11.547 "unmap": false, 00:14:11.547 "write_zeroes": true, 00:14:11.547 "flush": false, 00:14:11.547 "reset": true, 00:14:11.547 "compare": false, 00:14:11.547 "compare_and_write": false, 00:14:11.547 "abort": false, 00:14:11.547 "nvme_admin": false, 00:14:11.547 "nvme_io": false 00:14:11.547 }, 00:14:11.547 "memory_domains": [ 00:14:11.547 { 00:14:11.547 "dma_device_id": "system", 00:14:11.547 "dma_device_type": 1 00:14:11.547 }, 00:14:11.547 { 00:14:11.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.547 "dma_device_type": 2 00:14:11.547 }, 00:14:11.547 { 00:14:11.547 "dma_device_id": "system", 00:14:11.547 "dma_device_type": 1 00:14:11.547 }, 00:14:11.547 { 00:14:11.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.547 "dma_device_type": 2 00:14:11.547 } 00:14:11.547 ], 00:14:11.547 "driver_specific": { 00:14:11.547 "raid": { 00:14:11.547 "uuid": "4ef04676-e22c-46c0-a67c-1e72cb1a6565", 00:14:11.547 "strip_size_kb": 0, 00:14:11.547 "state": "online", 00:14:11.547 "raid_level": "raid1", 00:14:11.547 "superblock": false, 00:14:11.547 "num_base_bdevs": 2, 00:14:11.547 "num_base_bdevs_discovered": 2, 00:14:11.547 "num_base_bdevs_operational": 2, 00:14:11.547 "base_bdevs_list": [ 00:14:11.547 { 00:14:11.548 "name": "BaseBdev1", 00:14:11.548 "uuid": "69b1f32c-9b45-42df-8312-e0e47a310987", 00:14:11.548 "is_configured": true, 00:14:11.548 "data_offset": 0, 00:14:11.548 "data_size": 65536 00:14:11.548 }, 00:14:11.548 { 00:14:11.548 "name": "BaseBdev2", 00:14:11.548 "uuid": "de3a38cd-029d-401f-99b3-7f3087f7772e", 00:14:11.548 "is_configured": true, 00:14:11.548 "data_offset": 0, 00:14:11.548 "data_size": 65536 00:14:11.548 } 00:14:11.548 ] 00:14:11.548 } 00:14:11.548 } 00:14:11.548 }' 00:14:11.548 18:58:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.548 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:11.548 BaseBdev2' 00:14:11.548 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.548 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:11.548 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.807 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.807 "name": "BaseBdev1", 00:14:11.807 "aliases": [ 00:14:11.807 "69b1f32c-9b45-42df-8312-e0e47a310987" 00:14:11.807 ], 00:14:11.807 "product_name": "Malloc disk", 00:14:11.807 "block_size": 512, 00:14:11.807 "num_blocks": 65536, 00:14:11.807 "uuid": "69b1f32c-9b45-42df-8312-e0e47a310987", 00:14:11.807 "assigned_rate_limits": { 00:14:11.807 "rw_ios_per_sec": 0, 00:14:11.807 "rw_mbytes_per_sec": 0, 00:14:11.807 "r_mbytes_per_sec": 0, 00:14:11.807 "w_mbytes_per_sec": 0 00:14:11.807 }, 00:14:11.807 "claimed": true, 00:14:11.807 "claim_type": "exclusive_write", 00:14:11.807 "zoned": false, 00:14:11.807 "supported_io_types": { 00:14:11.807 "read": true, 00:14:11.807 "write": true, 00:14:11.807 "unmap": true, 00:14:11.807 "write_zeroes": true, 00:14:11.807 "flush": true, 00:14:11.807 "reset": true, 00:14:11.807 "compare": false, 00:14:11.807 "compare_and_write": false, 00:14:11.807 "abort": true, 00:14:11.807 "nvme_admin": false, 00:14:11.807 "nvme_io": false 00:14:11.807 }, 00:14:11.807 "memory_domains": [ 00:14:11.807 { 00:14:11.807 "dma_device_id": "system", 00:14:11.807 "dma_device_type": 1 00:14:11.807 }, 00:14:11.807 { 00:14:11.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.807 
"dma_device_type": 2 00:14:11.807 } 00:14:11.807 ], 00:14:11.807 "driver_specific": {} 00:14:11.807 }' 00:14:11.807 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.807 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.807 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.807 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.807 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:12.066 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:12.325 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:12.325 "name": "BaseBdev2", 00:14:12.325 "aliases": [ 00:14:12.325 "de3a38cd-029d-401f-99b3-7f3087f7772e" 00:14:12.325 ], 00:14:12.325 
"product_name": "Malloc disk", 00:14:12.325 "block_size": 512, 00:14:12.325 "num_blocks": 65536, 00:14:12.325 "uuid": "de3a38cd-029d-401f-99b3-7f3087f7772e", 00:14:12.325 "assigned_rate_limits": { 00:14:12.325 "rw_ios_per_sec": 0, 00:14:12.325 "rw_mbytes_per_sec": 0, 00:14:12.325 "r_mbytes_per_sec": 0, 00:14:12.325 "w_mbytes_per_sec": 0 00:14:12.325 }, 00:14:12.325 "claimed": true, 00:14:12.325 "claim_type": "exclusive_write", 00:14:12.325 "zoned": false, 00:14:12.325 "supported_io_types": { 00:14:12.325 "read": true, 00:14:12.325 "write": true, 00:14:12.325 "unmap": true, 00:14:12.325 "write_zeroes": true, 00:14:12.325 "flush": true, 00:14:12.325 "reset": true, 00:14:12.325 "compare": false, 00:14:12.325 "compare_and_write": false, 00:14:12.325 "abort": true, 00:14:12.325 "nvme_admin": false, 00:14:12.325 "nvme_io": false 00:14:12.325 }, 00:14:12.325 "memory_domains": [ 00:14:12.325 { 00:14:12.325 "dma_device_id": "system", 00:14:12.325 "dma_device_type": 1 00:14:12.325 }, 00:14:12.325 { 00:14:12.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.325 "dma_device_type": 2 00:14:12.325 } 00:14:12.325 ], 00:14:12.325 "driver_specific": {} 00:14:12.325 }' 00:14:12.325 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.325 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:12.325 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:12.325 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.584 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.843 [2024-06-10 18:58:27.516560] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:12.843 18:58:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.843 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.101 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.101 "name": "Existed_Raid", 00:14:13.101 "uuid": "4ef04676-e22c-46c0-a67c-1e72cb1a6565", 00:14:13.101 "strip_size_kb": 0, 00:14:13.101 "state": "online", 00:14:13.101 "raid_level": "raid1", 00:14:13.101 "superblock": false, 00:14:13.101 "num_base_bdevs": 2, 00:14:13.101 "num_base_bdevs_discovered": 1, 00:14:13.101 "num_base_bdevs_operational": 1, 00:14:13.101 "base_bdevs_list": [ 00:14:13.101 { 00:14:13.101 "name": null, 00:14:13.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.101 "is_configured": false, 00:14:13.101 "data_offset": 0, 00:14:13.101 "data_size": 65536 00:14:13.101 }, 00:14:13.101 { 00:14:13.101 "name": "BaseBdev2", 00:14:13.101 "uuid": "de3a38cd-029d-401f-99b3-7f3087f7772e", 00:14:13.101 "is_configured": true, 00:14:13.101 "data_offset": 0, 00:14:13.101 "data_size": 65536 00:14:13.101 } 00:14:13.101 ] 00:14:13.101 }' 00:14:13.101 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.101 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.665 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 
-- # (( i = 1 )) 00:14:13.665 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:13.665 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.665 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:13.923 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:13.923 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.923 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:14.181 [2024-06-10 18:58:28.752757] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:14.181 [2024-06-10 18:58:28.752822] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.181 [2024-06-10 18:58:28.763276] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.181 [2024-06-10 18:58:28.763305] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.181 [2024-06-10 18:58:28.763316] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcc55f0 name Existed_Raid, state offline 00:14:14.181 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:14.181 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:14.181 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.181 18:58:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1633842 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1633842 ']' 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1633842 00:14:14.441 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1633842 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1633842' 00:14:14.441 killing process with pid 1633842 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1633842 00:14:14.441 [2024-06-10 18:58:29.053827] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.441 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1633842 00:14:14.441 [2024-06-10 18:58:29.054665] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@343 -- # return 0 00:14:14.700 00:14:14.700 real 0m9.788s 00:14:14.700 user 0m17.363s 00:14:14.700 sys 0m1.858s 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.700 ************************************ 00:14:14.700 END TEST raid_state_function_test 00:14:14.700 ************************************ 00:14:14.700 18:58:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:14.700 18:58:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:14:14.700 18:58:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:14.700 18:58:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.700 ************************************ 00:14:14.700 START TEST raid_state_function_test_sb 00:14:14.700 ************************************ 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1635881 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1635881' 00:14:14.700 Process raid pid: 1635881 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1635881 /var/tmp/spdk-raid.sock 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1635881 ']' 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:14.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:14.700 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.700 [2024-06-10 18:58:29.389414] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:14:14.700 [2024-06-10 18:58:29.389471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.0 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.1 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.2 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.3 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.4 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.5 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.6 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:01.7 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.0 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.1 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.2 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.3 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.4 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.5 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.6 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b6:02.7 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.0 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.1 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.2 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.3 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.4 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.5 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.6 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:01.7 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.0 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.1 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:14:14.960 EAL: Requested device 0000:b8:02.2 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.3 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.4 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.5 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.6 cannot be used 00:14:14.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:14.960 EAL: Requested device 0000:b8:02.7 cannot be used 00:14:14.960 [2024-06-10 18:58:29.520696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.960 [2024-06-10 18:58:29.606821] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.960 [2024-06-10 18:58:29.664871] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.960 [2024-06-10 18:58:29.664904] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.898 [2024-06-10 18:58:30.498529] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.898 [2024-06-10 18:58:30.498568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.898 [2024-06-10 18:58:30.498587] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.898 [2024-06-10 18:58:30.498598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.898 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.157 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.157 "name": "Existed_Raid", 00:14:16.157 "uuid": "c01e0cf5-237a-41a3-9f7a-b77f2436c310", 00:14:16.157 "strip_size_kb": 0, 00:14:16.157 
"state": "configuring", 00:14:16.157 "raid_level": "raid1", 00:14:16.157 "superblock": true, 00:14:16.157 "num_base_bdevs": 2, 00:14:16.157 "num_base_bdevs_discovered": 0, 00:14:16.157 "num_base_bdevs_operational": 2, 00:14:16.157 "base_bdevs_list": [ 00:14:16.157 { 00:14:16.157 "name": "BaseBdev1", 00:14:16.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.157 "is_configured": false, 00:14:16.157 "data_offset": 0, 00:14:16.157 "data_size": 0 00:14:16.157 }, 00:14:16.157 { 00:14:16.157 "name": "BaseBdev2", 00:14:16.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.157 "is_configured": false, 00:14:16.157 "data_offset": 0, 00:14:16.157 "data_size": 0 00:14:16.157 } 00:14:16.157 ] 00:14:16.157 }' 00:14:16.157 18:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.157 18:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.725 18:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:16.984 [2024-06-10 18:58:31.529228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.984 [2024-06-10 18:58:31.529255] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2195f10 name Existed_Raid, state configuring 00:14:16.984 18:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:17.243 [2024-06-10 18:58:31.753821] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.243 [2024-06-10 18:58:31.753847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.243 [2024-06-10 18:58:31.753856] bdev.c:8114:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.243 [2024-06-10 18:58:31.753866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.243 18:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.243 [2024-06-10 18:58:31.984002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.243 BaseBdev1 00:14:17.243 18:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:17.513 18:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:17.513 18:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:17.513 18:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:17.513 18:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:17.513 18:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:17.513 18:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.514 18:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.780 [ 00:14:17.780 { 00:14:17.780 "name": "BaseBdev1", 00:14:17.780 "aliases": [ 00:14:17.780 "8eb69e00-b73b-4d21-b0ac-981054e14f79" 00:14:17.780 ], 00:14:17.780 "product_name": "Malloc disk", 00:14:17.780 "block_size": 512, 00:14:17.780 "num_blocks": 65536, 00:14:17.780 "uuid": 
"8eb69e00-b73b-4d21-b0ac-981054e14f79", 00:14:17.780 "assigned_rate_limits": { 00:14:17.780 "rw_ios_per_sec": 0, 00:14:17.780 "rw_mbytes_per_sec": 0, 00:14:17.780 "r_mbytes_per_sec": 0, 00:14:17.780 "w_mbytes_per_sec": 0 00:14:17.780 }, 00:14:17.780 "claimed": true, 00:14:17.780 "claim_type": "exclusive_write", 00:14:17.780 "zoned": false, 00:14:17.780 "supported_io_types": { 00:14:17.780 "read": true, 00:14:17.780 "write": true, 00:14:17.780 "unmap": true, 00:14:17.780 "write_zeroes": true, 00:14:17.780 "flush": true, 00:14:17.780 "reset": true, 00:14:17.780 "compare": false, 00:14:17.780 "compare_and_write": false, 00:14:17.780 "abort": true, 00:14:17.780 "nvme_admin": false, 00:14:17.780 "nvme_io": false 00:14:17.780 }, 00:14:17.780 "memory_domains": [ 00:14:17.780 { 00:14:17.780 "dma_device_id": "system", 00:14:17.780 "dma_device_type": 1 00:14:17.780 }, 00:14:17.780 { 00:14:17.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.780 "dma_device_type": 2 00:14:17.780 } 00:14:17.780 ], 00:14:17.780 "driver_specific": {} 00:14:17.780 } 00:14:17.780 ] 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.780 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.040 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.040 "name": "Existed_Raid", 00:14:18.040 "uuid": "aad4368a-c9fe-49da-9a98-bdbb2159d538", 00:14:18.040 "strip_size_kb": 0, 00:14:18.040 "state": "configuring", 00:14:18.040 "raid_level": "raid1", 00:14:18.040 "superblock": true, 00:14:18.040 "num_base_bdevs": 2, 00:14:18.040 "num_base_bdevs_discovered": 1, 00:14:18.040 "num_base_bdevs_operational": 2, 00:14:18.040 "base_bdevs_list": [ 00:14:18.040 { 00:14:18.040 "name": "BaseBdev1", 00:14:18.040 "uuid": "8eb69e00-b73b-4d21-b0ac-981054e14f79", 00:14:18.040 "is_configured": true, 00:14:18.040 "data_offset": 2048, 00:14:18.040 "data_size": 63488 00:14:18.040 }, 00:14:18.040 { 00:14:18.040 "name": "BaseBdev2", 00:14:18.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.040 "is_configured": false, 00:14:18.040 "data_offset": 0, 00:14:18.040 "data_size": 0 00:14:18.040 } 00:14:18.040 ] 00:14:18.040 }' 00:14:18.040 18:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.040 18:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.607 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:18.866 [2024-06-10 18:58:33.439831] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.866 [2024-06-10 18:58:33.439865] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2195800 name Existed_Raid, state configuring 00:14:18.866 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:19.125 [2024-06-10 18:58:33.652416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.125 [2024-06-10 18:58:33.653769] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.125 [2024-06-10 18:58:33.653800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.125 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.384 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.384 "name": "Existed_Raid", 00:14:19.384 "uuid": "57f5d0c4-ce1c-4a96-984b-81fc9c179433", 00:14:19.384 "strip_size_kb": 0, 00:14:19.384 "state": "configuring", 00:14:19.384 "raid_level": "raid1", 00:14:19.384 "superblock": true, 00:14:19.384 "num_base_bdevs": 2, 00:14:19.384 "num_base_bdevs_discovered": 1, 00:14:19.384 "num_base_bdevs_operational": 2, 00:14:19.384 "base_bdevs_list": [ 00:14:19.384 { 00:14:19.384 "name": "BaseBdev1", 00:14:19.384 "uuid": "8eb69e00-b73b-4d21-b0ac-981054e14f79", 00:14:19.384 "is_configured": true, 00:14:19.384 "data_offset": 2048, 00:14:19.384 "data_size": 63488 00:14:19.384 }, 00:14:19.384 { 00:14:19.384 "name": "BaseBdev2", 00:14:19.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.384 "is_configured": false, 00:14:19.384 "data_offset": 0, 00:14:19.384 "data_size": 0 00:14:19.384 } 00:14:19.384 ] 00:14:19.384 }' 00:14:19.384 18:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.384 18:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.952 18:58:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.953 [2024-06-10 18:58:34.642105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.953 [2024-06-10 18:58:34.642236] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21965f0 00:14:19.953 [2024-06-10 18:58:34.642249] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.953 [2024-06-10 18:58:34.642407] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23483b0 00:14:19.953 [2024-06-10 18:58:34.642520] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21965f0 00:14:19.953 [2024-06-10 18:58:34.642530] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x21965f0 00:14:19.953 [2024-06-10 18:58:34.642623] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.953 BaseBdev2 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:19.953 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
00:14:20.212 18:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.470 [ 00:14:20.470 { 00:14:20.470 "name": "BaseBdev2", 00:14:20.470 "aliases": [ 00:14:20.470 "bef45c6f-7dab-41cf-973e-892810dd68d9" 00:14:20.470 ], 00:14:20.470 "product_name": "Malloc disk", 00:14:20.470 "block_size": 512, 00:14:20.470 "num_blocks": 65536, 00:14:20.470 "uuid": "bef45c6f-7dab-41cf-973e-892810dd68d9", 00:14:20.470 "assigned_rate_limits": { 00:14:20.470 "rw_ios_per_sec": 0, 00:14:20.470 "rw_mbytes_per_sec": 0, 00:14:20.470 "r_mbytes_per_sec": 0, 00:14:20.470 "w_mbytes_per_sec": 0 00:14:20.470 }, 00:14:20.470 "claimed": true, 00:14:20.470 "claim_type": "exclusive_write", 00:14:20.470 "zoned": false, 00:14:20.470 "supported_io_types": { 00:14:20.470 "read": true, 00:14:20.470 "write": true, 00:14:20.470 "unmap": true, 00:14:20.470 "write_zeroes": true, 00:14:20.470 "flush": true, 00:14:20.470 "reset": true, 00:14:20.470 "compare": false, 00:14:20.470 "compare_and_write": false, 00:14:20.470 "abort": true, 00:14:20.470 "nvme_admin": false, 00:14:20.470 "nvme_io": false 00:14:20.470 }, 00:14:20.470 "memory_domains": [ 00:14:20.470 { 00:14:20.470 "dma_device_id": "system", 00:14:20.470 "dma_device_type": 1 00:14:20.470 }, 00:14:20.470 { 00:14:20.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.470 "dma_device_type": 2 00:14:20.470 } 00:14:20.470 ], 00:14:20.470 "driver_specific": {} 00:14:20.470 } 00:14:20.471 ] 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.471 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.729 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.729 "name": "Existed_Raid", 00:14:20.729 "uuid": "57f5d0c4-ce1c-4a96-984b-81fc9c179433", 00:14:20.729 "strip_size_kb": 0, 00:14:20.729 "state": "online", 00:14:20.729 "raid_level": "raid1", 00:14:20.729 "superblock": true, 00:14:20.729 "num_base_bdevs": 2, 00:14:20.729 "num_base_bdevs_discovered": 2, 00:14:20.729 "num_base_bdevs_operational": 2, 00:14:20.729 "base_bdevs_list": [ 00:14:20.729 { 00:14:20.729 "name": "BaseBdev1", 00:14:20.729 "uuid": 
"8eb69e00-b73b-4d21-b0ac-981054e14f79", 00:14:20.729 "is_configured": true, 00:14:20.729 "data_offset": 2048, 00:14:20.729 "data_size": 63488 00:14:20.729 }, 00:14:20.729 { 00:14:20.729 "name": "BaseBdev2", 00:14:20.729 "uuid": "bef45c6f-7dab-41cf-973e-892810dd68d9", 00:14:20.729 "is_configured": true, 00:14:20.729 "data_offset": 2048, 00:14:20.729 "data_size": 63488 00:14:20.729 } 00:14:20.729 ] 00:14:20.729 }' 00:14:20.729 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.729 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:21.296 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:21.556 [2024-06-10 18:58:36.122228] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.556 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:21.556 "name": "Existed_Raid", 00:14:21.556 "aliases": [ 00:14:21.556 "57f5d0c4-ce1c-4a96-984b-81fc9c179433" 00:14:21.556 ], 00:14:21.556 
"product_name": "Raid Volume", 00:14:21.556 "block_size": 512, 00:14:21.556 "num_blocks": 63488, 00:14:21.556 "uuid": "57f5d0c4-ce1c-4a96-984b-81fc9c179433", 00:14:21.556 "assigned_rate_limits": { 00:14:21.556 "rw_ios_per_sec": 0, 00:14:21.556 "rw_mbytes_per_sec": 0, 00:14:21.556 "r_mbytes_per_sec": 0, 00:14:21.556 "w_mbytes_per_sec": 0 00:14:21.556 }, 00:14:21.556 "claimed": false, 00:14:21.556 "zoned": false, 00:14:21.556 "supported_io_types": { 00:14:21.556 "read": true, 00:14:21.556 "write": true, 00:14:21.556 "unmap": false, 00:14:21.556 "write_zeroes": true, 00:14:21.556 "flush": false, 00:14:21.556 "reset": true, 00:14:21.556 "compare": false, 00:14:21.556 "compare_and_write": false, 00:14:21.556 "abort": false, 00:14:21.556 "nvme_admin": false, 00:14:21.556 "nvme_io": false 00:14:21.556 }, 00:14:21.556 "memory_domains": [ 00:14:21.556 { 00:14:21.556 "dma_device_id": "system", 00:14:21.556 "dma_device_type": 1 00:14:21.556 }, 00:14:21.556 { 00:14:21.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.556 "dma_device_type": 2 00:14:21.556 }, 00:14:21.556 { 00:14:21.556 "dma_device_id": "system", 00:14:21.556 "dma_device_type": 1 00:14:21.556 }, 00:14:21.556 { 00:14:21.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.556 "dma_device_type": 2 00:14:21.556 } 00:14:21.556 ], 00:14:21.556 "driver_specific": { 00:14:21.556 "raid": { 00:14:21.556 "uuid": "57f5d0c4-ce1c-4a96-984b-81fc9c179433", 00:14:21.556 "strip_size_kb": 0, 00:14:21.556 "state": "online", 00:14:21.556 "raid_level": "raid1", 00:14:21.556 "superblock": true, 00:14:21.556 "num_base_bdevs": 2, 00:14:21.556 "num_base_bdevs_discovered": 2, 00:14:21.556 "num_base_bdevs_operational": 2, 00:14:21.556 "base_bdevs_list": [ 00:14:21.556 { 00:14:21.556 "name": "BaseBdev1", 00:14:21.556 "uuid": "8eb69e00-b73b-4d21-b0ac-981054e14f79", 00:14:21.556 "is_configured": true, 00:14:21.556 "data_offset": 2048, 00:14:21.556 "data_size": 63488 00:14:21.556 }, 00:14:21.556 { 00:14:21.556 "name": "BaseBdev2", 
00:14:21.556 "uuid": "bef45c6f-7dab-41cf-973e-892810dd68d9", 00:14:21.556 "is_configured": true, 00:14:21.556 "data_offset": 2048, 00:14:21.556 "data_size": 63488 00:14:21.556 } 00:14:21.556 ] 00:14:21.556 } 00:14:21.556 } 00:14:21.556 }' 00:14:21.556 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.556 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:21.556 BaseBdev2' 00:14:21.556 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.556 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.556 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:21.816 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.816 "name": "BaseBdev1", 00:14:21.816 "aliases": [ 00:14:21.816 "8eb69e00-b73b-4d21-b0ac-981054e14f79" 00:14:21.816 ], 00:14:21.816 "product_name": "Malloc disk", 00:14:21.816 "block_size": 512, 00:14:21.816 "num_blocks": 65536, 00:14:21.816 "uuid": "8eb69e00-b73b-4d21-b0ac-981054e14f79", 00:14:21.816 "assigned_rate_limits": { 00:14:21.816 "rw_ios_per_sec": 0, 00:14:21.816 "rw_mbytes_per_sec": 0, 00:14:21.816 "r_mbytes_per_sec": 0, 00:14:21.816 "w_mbytes_per_sec": 0 00:14:21.816 }, 00:14:21.816 "claimed": true, 00:14:21.816 "claim_type": "exclusive_write", 00:14:21.816 "zoned": false, 00:14:21.816 "supported_io_types": { 00:14:21.816 "read": true, 00:14:21.816 "write": true, 00:14:21.816 "unmap": true, 00:14:21.816 "write_zeroes": true, 00:14:21.816 "flush": true, 00:14:21.816 "reset": true, 00:14:21.816 "compare": false, 00:14:21.816 "compare_and_write": false, 00:14:21.816 "abort": 
true, 00:14:21.816 "nvme_admin": false, 00:14:21.816 "nvme_io": false 00:14:21.816 }, 00:14:21.816 "memory_domains": [ 00:14:21.816 { 00:14:21.816 "dma_device_id": "system", 00:14:21.816 "dma_device_type": 1 00:14:21.816 }, 00:14:21.816 { 00:14:21.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.816 "dma_device_type": 2 00:14:21.816 } 00:14:21.816 ], 00:14:21.816 "driver_specific": {} 00:14:21.816 }' 00:14:21.816 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.816 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.816 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.816 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.816 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 00:14:22.075 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:22.334 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:22.334 "name": "BaseBdev2", 00:14:22.334 "aliases": [ 00:14:22.334 "bef45c6f-7dab-41cf-973e-892810dd68d9" 00:14:22.334 ], 00:14:22.334 "product_name": "Malloc disk", 00:14:22.334 "block_size": 512, 00:14:22.334 "num_blocks": 65536, 00:14:22.334 "uuid": "bef45c6f-7dab-41cf-973e-892810dd68d9", 00:14:22.334 "assigned_rate_limits": { 00:14:22.334 "rw_ios_per_sec": 0, 00:14:22.334 "rw_mbytes_per_sec": 0, 00:14:22.334 "r_mbytes_per_sec": 0, 00:14:22.334 "w_mbytes_per_sec": 0 00:14:22.334 }, 00:14:22.334 "claimed": true, 00:14:22.334 "claim_type": "exclusive_write", 00:14:22.334 "zoned": false, 00:14:22.334 "supported_io_types": { 00:14:22.334 "read": true, 00:14:22.334 "write": true, 00:14:22.334 "unmap": true, 00:14:22.334 "write_zeroes": true, 00:14:22.334 "flush": true, 00:14:22.334 "reset": true, 00:14:22.334 "compare": false, 00:14:22.334 "compare_and_write": false, 00:14:22.334 "abort": true, 00:14:22.334 "nvme_admin": false, 00:14:22.334 "nvme_io": false 00:14:22.334 }, 00:14:22.334 "memory_domains": [ 00:14:22.334 { 00:14:22.334 "dma_device_id": "system", 00:14:22.334 "dma_device_type": 1 00:14:22.334 }, 00:14:22.334 { 00:14:22.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.334 "dma_device_type": 2 00:14:22.334 } 00:14:22.334 ], 00:14:22.334 "driver_specific": {} 00:14:22.334 }' 00:14:22.334 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.334 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.334 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.592 18:58:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.592 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.851 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.851 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:22.851 [2024-06-10 18:58:37.565873] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.851 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:22.851 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:22.851 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:22.851 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.852 18:58:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.852 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.110 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.110 "name": "Existed_Raid", 00:14:23.110 "uuid": "57f5d0c4-ce1c-4a96-984b-81fc9c179433", 00:14:23.110 "strip_size_kb": 0, 00:14:23.110 "state": "online", 00:14:23.110 "raid_level": "raid1", 00:14:23.110 "superblock": true, 00:14:23.110 "num_base_bdevs": 2, 00:14:23.110 "num_base_bdevs_discovered": 1, 00:14:23.110 "num_base_bdevs_operational": 1, 00:14:23.110 "base_bdevs_list": [ 00:14:23.110 { 00:14:23.110 "name": null, 00:14:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.110 "is_configured": false, 00:14:23.110 "data_offset": 2048, 00:14:23.110 "data_size": 63488 00:14:23.110 }, 00:14:23.110 { 00:14:23.110 "name": "BaseBdev2", 
00:14:23.110 "uuid": "bef45c6f-7dab-41cf-973e-892810dd68d9", 00:14:23.110 "is_configured": true, 00:14:23.110 "data_offset": 2048, 00:14:23.111 "data_size": 63488 00:14:23.111 } 00:14:23.111 ] 00:14:23.111 }' 00:14:23.111 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.111 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.679 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:23.679 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:23.679 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.679 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:23.938 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:23.938 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:23.938 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:24.196 [2024-06-10 18:58:38.790152] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.196 [2024-06-10 18:58:38.790221] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.196 [2024-06-10 18:58:38.800556] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.196 [2024-06-10 18:58:38.800593] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.197 [2024-06-10 18:58:38.800604] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x21965f0 name Existed_Raid, state offline 00:14:24.197 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:24.197 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:24.197 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.197 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1635881 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1635881 ']' 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1635881 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1635881 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
1635881' 00:14:24.468 killing process with pid 1635881 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1635881 00:14:24.468 [2024-06-10 18:58:39.104507] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.468 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1635881 00:14:24.468 [2024-06-10 18:58:39.105352] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.741 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:24.741 00:14:24.741 real 0m9.972s 00:14:24.741 user 0m17.682s 00:14:24.741 sys 0m1.856s 00:14:24.741 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:24.741 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.741 ************************************ 00:14:24.741 END TEST raid_state_function_test_sb 00:14:24.741 ************************************ 00:14:24.741 18:58:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:24.741 18:58:39 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:14:24.741 18:58:39 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:24.741 18:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.741 ************************************ 00:14:24.741 START TEST raid_superblock_test 00:14:24.741 ************************************ 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 
00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1637749 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1637749 /var/tmp/spdk-raid.sock 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1637749 ']' 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:24.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:24.741 18:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.741 [2024-06-10 18:58:39.440851] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:14:24.741 [2024-06-10 18:58:39.440915] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637749 ] 00:14:25.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.001 EAL: Requested device 0000:b6:01.0 cannot be used 00:14:25.001 [2024-06-10 18:58:39.574120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.001 [2024-06-10 18:58:39.661549] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.001 [2024-06-10 18:58:39.727101] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.001 [2024-06-10 18:58:39.727139] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859
-- # (( i == 0 )) 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:25.939 malloc1 00:14:25.939 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.199 [2024-06-10 18:58:40.788108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.199 [2024-06-10 18:58:40.788156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.199 [2024-06-10 18:58:40.788175] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197eb70 00:14:26.199 [2024-06-10 18:58:40.788187] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.199 
[2024-06-10 18:58:40.789703] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.199 [2024-06-10 18:58:40.789730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.199 pt1 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.199 18:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:26.458 malloc2 00:14:26.458 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.717 [2024-06-10 18:58:41.241810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.717 [2024-06-10 18:58:41.241850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.717 [2024-06-10 18:58:41.241866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197ff70 
00:14:26.717 [2024-06-10 18:58:41.241878] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.717 [2024-06-10 18:58:41.243287] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.717 [2024-06-10 18:58:41.243313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.717 pt2 00:14:26.717 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:26.717 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:26.717 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:26.717 [2024-06-10 18:58:41.470420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.717 [2024-06-10 18:58:41.471590] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.717 [2024-06-10 18:58:41.471723] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b22870 00:14:26.717 [2024-06-10 18:58:41.471735] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.717 [2024-06-10 18:58:41.471906] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b18290 00:14:26.717 [2024-06-10 18:58:41.472036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b22870 00:14:26.717 [2024-06-10 18:58:41.472046] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b22870 00:14:26.717 [2024-06-10 18:58:41.472134] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.976 "name": "raid_bdev1", 00:14:26.976 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6", 00:14:26.976 "strip_size_kb": 0, 00:14:26.976 "state": "online", 00:14:26.976 "raid_level": "raid1", 00:14:26.976 "superblock": true, 00:14:26.976 "num_base_bdevs": 2, 00:14:26.976 "num_base_bdevs_discovered": 2, 00:14:26.976 "num_base_bdevs_operational": 2, 00:14:26.976 "base_bdevs_list": [ 00:14:26.976 { 00:14:26.976 "name": "pt1", 00:14:26.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.976 "is_configured": true, 00:14:26.976 "data_offset": 2048, 00:14:26.976 "data_size": 63488 00:14:26.976 }, 00:14:26.976 { 00:14:26.976 "name": "pt2", 
00:14:26.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.976 "is_configured": true, 00:14:26.976 "data_offset": 2048, 00:14:26.976 "data_size": 63488 00:14:26.976 } 00:14:26.976 ] 00:14:26.976 }' 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.976 18:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:27.551 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:27.811 [2024-06-10 18:58:42.501400] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.811 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:27.811 "name": "raid_bdev1", 00:14:27.811 "aliases": [ 00:14:27.811 "cccb02af-0e92-45ea-a152-6d55d25ab4d6" 00:14:27.811 ], 00:14:27.811 "product_name": "Raid Volume", 00:14:27.811 "block_size": 512, 00:14:27.811 "num_blocks": 63488, 00:14:27.811 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6", 00:14:27.811 "assigned_rate_limits": { 00:14:27.811 "rw_ios_per_sec": 0, 00:14:27.811 "rw_mbytes_per_sec": 0, 00:14:27.811 
"r_mbytes_per_sec": 0, 00:14:27.811 "w_mbytes_per_sec": 0 00:14:27.811 }, 00:14:27.811 "claimed": false, 00:14:27.811 "zoned": false, 00:14:27.811 "supported_io_types": { 00:14:27.811 "read": true, 00:14:27.811 "write": true, 00:14:27.811 "unmap": false, 00:14:27.811 "write_zeroes": true, 00:14:27.811 "flush": false, 00:14:27.811 "reset": true, 00:14:27.811 "compare": false, 00:14:27.811 "compare_and_write": false, 00:14:27.811 "abort": false, 00:14:27.811 "nvme_admin": false, 00:14:27.811 "nvme_io": false 00:14:27.811 }, 00:14:27.811 "memory_domains": [ 00:14:27.811 { 00:14:27.811 "dma_device_id": "system", 00:14:27.811 "dma_device_type": 1 00:14:27.811 }, 00:14:27.811 { 00:14:27.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.811 "dma_device_type": 2 00:14:27.811 }, 00:14:27.811 { 00:14:27.811 "dma_device_id": "system", 00:14:27.811 "dma_device_type": 1 00:14:27.811 }, 00:14:27.811 { 00:14:27.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.811 "dma_device_type": 2 00:14:27.811 } 00:14:27.811 ], 00:14:27.811 "driver_specific": { 00:14:27.811 "raid": { 00:14:27.811 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6", 00:14:27.811 "strip_size_kb": 0, 00:14:27.811 "state": "online", 00:14:27.811 "raid_level": "raid1", 00:14:27.811 "superblock": true, 00:14:27.811 "num_base_bdevs": 2, 00:14:27.811 "num_base_bdevs_discovered": 2, 00:14:27.811 "num_base_bdevs_operational": 2, 00:14:27.811 "base_bdevs_list": [ 00:14:27.811 { 00:14:27.811 "name": "pt1", 00:14:27.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.811 "is_configured": true, 00:14:27.811 "data_offset": 2048, 00:14:27.811 "data_size": 63488 00:14:27.811 }, 00:14:27.811 { 00:14:27.811 "name": "pt2", 00:14:27.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.811 "is_configured": true, 00:14:27.811 "data_offset": 2048, 00:14:27.811 "data_size": 63488 00:14:27.811 } 00:14:27.811 ] 00:14:27.811 } 00:14:27.811 } 00:14:27.811 }' 00:14:27.811 18:58:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:28.071 pt2' 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.071 "name": "pt1", 00:14:28.071 "aliases": [ 00:14:28.071 "00000000-0000-0000-0000-000000000001" 00:14:28.071 ], 00:14:28.071 "product_name": "passthru", 00:14:28.071 "block_size": 512, 00:14:28.071 "num_blocks": 65536, 00:14:28.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.071 "assigned_rate_limits": { 00:14:28.071 "rw_ios_per_sec": 0, 00:14:28.071 "rw_mbytes_per_sec": 0, 00:14:28.071 "r_mbytes_per_sec": 0, 00:14:28.071 "w_mbytes_per_sec": 0 00:14:28.071 }, 00:14:28.071 "claimed": true, 00:14:28.071 "claim_type": "exclusive_write", 00:14:28.071 "zoned": false, 00:14:28.071 "supported_io_types": { 00:14:28.071 "read": true, 00:14:28.071 "write": true, 00:14:28.071 "unmap": true, 00:14:28.071 "write_zeroes": true, 00:14:28.071 "flush": true, 00:14:28.071 "reset": true, 00:14:28.071 "compare": false, 00:14:28.071 "compare_and_write": false, 00:14:28.071 "abort": true, 00:14:28.071 "nvme_admin": false, 00:14:28.071 "nvme_io": false 00:14:28.071 }, 00:14:28.071 "memory_domains": [ 00:14:28.071 { 00:14:28.071 "dma_device_id": "system", 00:14:28.071 "dma_device_type": 1 00:14:28.071 }, 00:14:28.071 { 00:14:28.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.071 "dma_device_type": 2 00:14:28.071 } 00:14:28.071 ], 
00:14:28.071 "driver_specific": { 00:14:28.071 "passthru": { 00:14:28.071 "name": "pt1", 00:14:28.071 "base_bdev_name": "malloc1" 00:14:28.071 } 00:14:28.071 } 00:14:28.071 }' 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.071 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.331 18:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.331 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.331 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.331 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.331 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:28.331 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.590 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.590 "name": "pt2", 00:14:28.590 "aliases": [ 00:14:28.590 "00000000-0000-0000-0000-000000000002" 00:14:28.590 ], 00:14:28.590 
"product_name": "passthru", 00:14:28.590 "block_size": 512, 00:14:28.590 "num_blocks": 65536, 00:14:28.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.590 "assigned_rate_limits": { 00:14:28.590 "rw_ios_per_sec": 0, 00:14:28.590 "rw_mbytes_per_sec": 0, 00:14:28.590 "r_mbytes_per_sec": 0, 00:14:28.590 "w_mbytes_per_sec": 0 00:14:28.590 }, 00:14:28.590 "claimed": true, 00:14:28.590 "claim_type": "exclusive_write", 00:14:28.590 "zoned": false, 00:14:28.590 "supported_io_types": { 00:14:28.590 "read": true, 00:14:28.590 "write": true, 00:14:28.590 "unmap": true, 00:14:28.590 "write_zeroes": true, 00:14:28.590 "flush": true, 00:14:28.590 "reset": true, 00:14:28.590 "compare": false, 00:14:28.590 "compare_and_write": false, 00:14:28.590 "abort": true, 00:14:28.590 "nvme_admin": false, 00:14:28.590 "nvme_io": false 00:14:28.590 }, 00:14:28.590 "memory_domains": [ 00:14:28.590 { 00:14:28.590 "dma_device_id": "system", 00:14:28.590 "dma_device_type": 1 00:14:28.590 }, 00:14:28.590 { 00:14:28.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.590 "dma_device_type": 2 00:14:28.590 } 00:14:28.590 ], 00:14:28.590 "driver_specific": { 00:14:28.590 "passthru": { 00:14:28.590 "name": "pt2", 00:14:28.590 "base_bdev_name": "malloc2" 00:14:28.590 } 00:14:28.590 } 00:14:28.590 }' 00:14:28.590 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.590 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.859 
18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.859 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.119 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.120 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.120 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:29.120 [2024-06-10 18:58:43.856977] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.120 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cccb02af-0e92-45ea-a152-6d55d25ab4d6 00:14:29.120 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z cccb02af-0e92-45ea-a152-6d55d25ab4d6 ']' 00:14:29.120 18:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:29.379 [2024-06-10 18:58:44.025218] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.379 [2024-06-10 18:58:44.025234] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.379 [2024-06-10 18:58:44.025280] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.379 [2024-06-10 18:58:44.025330] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.379 [2024-06-10 18:58:44.025340] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x1b22870 name raid_bdev1, state offline
00:14:29.379 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:29.379 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:14:29.638 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:14:29.638 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:14:29.638 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:14:29.638 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:29.897 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:14:29.897 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:30.156 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:14:30.156 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:14:30.415 18:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:14:30.675 [2024-06-10 18:58:45.172347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:30.675 [2024-06-10 18:58:45.173603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:30.675 [2024-06-10 18:58:45.173652] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:14:30.675 [2024-06-10 18:58:45.173689] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:14:30.675 [2024-06-10 18:58:45.173707] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:30.675 [2024-06-10 18:58:45.173716] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x197f010 name raid_bdev1, state configuring
00:14:30.675 request:
00:14:30.675 {
00:14:30.675 "name": "raid_bdev1",
00:14:30.675 "raid_level": "raid1",
00:14:30.675 "base_bdevs": [
00:14:30.675 "malloc1",
00:14:30.675 "malloc2"
00:14:30.675 ],
00:14:30.675 "superblock": false,
00:14:30.675 "method": "bdev_raid_create",
00:14:30.675 "req_id": 1
00:14:30.675 }
00:14:30.675 Got JSON-RPC error response
00:14:30.675 response:
00:14:30.675 {
00:14:30.675 "code": -17,
00:14:30.675 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:30.675 }
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:14:30.675 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:30.935 [2024-06-10 18:58:45.625491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:30.935 [2024-06-10 18:58:45.625530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:30.935 [2024-06-10 18:58:45.625546] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b225f0
00:14:30.935 [2024-06-10 18:58:45.625558] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:30.935 [2024-06-10 18:58:45.627030] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:30.935 [2024-06-10 18:58:45.627057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:30.935 [2024-06-10 18:58:45.627121] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:30.935 [2024-06-10 18:58:45.627143] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:30.935 pt1
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:30.935 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:31.194 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:31.194 "name": "raid_bdev1",
00:14:31.194 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:31.194 "strip_size_kb": 0,
00:14:31.194 "state": "configuring",
00:14:31.194 "raid_level": "raid1",
00:14:31.194 "superblock": true,
00:14:31.194 "num_base_bdevs": 2,
00:14:31.194 "num_base_bdevs_discovered": 1,
00:14:31.194 "num_base_bdevs_operational": 2,
00:14:31.194 "base_bdevs_list": [
00:14:31.194 {
00:14:31.194 "name": "pt1",
00:14:31.194 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:31.194 "is_configured": true,
00:14:31.194 "data_offset": 2048,
00:14:31.194 "data_size": 63488
00:14:31.194 },
00:14:31.194 {
00:14:31.194 "name": null,
00:14:31.194 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:31.194 "is_configured": false,
00:14:31.194 "data_offset": 2048,
00:14:31.194 "data_size": 63488
00:14:31.194 }
00:14:31.194 ]
00:14:31.194 }'
00:14:31.194 18:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:31.194 18:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:31.762 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']'
00:14:31.762 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 ))
00:14:31.762 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs ))
00:14:31.762 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:32.022 [2024-06-10 18:58:46.624113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:32.022 [2024-06-10 18:58:46.624157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:32.022 [2024-06-10 18:58:46.624174] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197eda0
00:14:32.022 [2024-06-10 18:58:46.624196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:32.022 [2024-06-10 18:58:46.624503] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:32.022 [2024-06-10 18:58:46.624518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:32.022 [2024-06-10 18:58:46.624573] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:32.022 [2024-06-10 18:58:46.624601] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:32.022 [2024-06-10 18:58:46.624690] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b17b90
00:14:32.022 [2024-06-10 18:58:46.624700] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:32.022 [2024-06-10 18:58:46.624852] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1978730
00:14:32.022 [2024-06-10 18:58:46.624964] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b17b90
00:14:32.022 [2024-06-10 18:58:46.624974] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b17b90
00:14:32.022 [2024-06-10 18:58:46.625059] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:32.022 pt2
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ ))
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs ))
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:32.022 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:32.282 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:32.282 "name": "raid_bdev1",
00:14:32.282 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:32.282 "strip_size_kb": 0,
00:14:32.282 "state": "online",
00:14:32.282 "raid_level": "raid1",
00:14:32.282 "superblock": true,
00:14:32.282 "num_base_bdevs": 2,
00:14:32.282 "num_base_bdevs_discovered": 2,
00:14:32.282 "num_base_bdevs_operational": 2,
00:14:32.282 "base_bdevs_list": [
00:14:32.282 {
00:14:32.282 "name": "pt1",
00:14:32.282 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:32.282 "is_configured": true,
00:14:32.282 "data_offset": 2048,
00:14:32.282 "data_size": 63488
00:14:32.282 },
00:14:32.282 {
00:14:32.282 "name": "pt2",
00:14:32.282 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:32.282 "is_configured": true,
00:14:32.282 "data_offset": 2048,
00:14:32.282 "data_size": 63488
00:14:32.282 }
00:14:32.282 ]
00:14:32.282 }'
00:14:32.282 18:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:32.282 18:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:32.851 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:14:33.110 [2024-06-10 18:58:47.655023] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:33.110 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:14:33.110 "name": "raid_bdev1",
00:14:33.110 "aliases": [
00:14:33.110 "cccb02af-0e92-45ea-a152-6d55d25ab4d6"
00:14:33.110 ],
00:14:33.110 "product_name": "Raid Volume",
00:14:33.110 "block_size": 512,
00:14:33.110 "num_blocks": 63488,
00:14:33.110 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:33.110 "assigned_rate_limits": {
00:14:33.110 "rw_ios_per_sec": 0,
00:14:33.110 "rw_mbytes_per_sec": 0,
00:14:33.110 "r_mbytes_per_sec": 0,
00:14:33.110 "w_mbytes_per_sec": 0
00:14:33.110 },
00:14:33.110 "claimed": false,
00:14:33.110 "zoned": false,
00:14:33.110 "supported_io_types": {
00:14:33.110 "read": true,
00:14:33.110 "write": true,
00:14:33.110 "unmap": false,
00:14:33.110 "write_zeroes": true,
00:14:33.110 "flush": false,
00:14:33.110 "reset": true,
00:14:33.110 "compare": false,
00:14:33.110 "compare_and_write": false,
00:14:33.110 "abort": false,
00:14:33.110 "nvme_admin": false,
00:14:33.110 "nvme_io": false
00:14:33.110 },
00:14:33.110 "memory_domains": [
00:14:33.110 {
00:14:33.110 "dma_device_id": "system",
00:14:33.110 "dma_device_type": 1
00:14:33.110 },
00:14:33.110 {
00:14:33.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:33.110 "dma_device_type": 2
00:14:33.110 },
00:14:33.110 {
00:14:33.110 "dma_device_id": "system",
00:14:33.110 "dma_device_type": 1
00:14:33.110 },
00:14:33.110 {
00:14:33.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:33.110 "dma_device_type": 2
00:14:33.110 }
00:14:33.110 ],
00:14:33.110 "driver_specific": {
00:14:33.110 "raid": {
00:14:33.110 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:33.110 "strip_size_kb": 0,
00:14:33.110 "state": "online",
00:14:33.110 "raid_level": "raid1",
00:14:33.110 "superblock": true,
00:14:33.110 "num_base_bdevs": 2,
00:14:33.110 "num_base_bdevs_discovered": 2,
00:14:33.110 "num_base_bdevs_operational": 2,
00:14:33.110 "base_bdevs_list": [
00:14:33.110 {
00:14:33.110 "name": "pt1",
00:14:33.110 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:33.110 "is_configured": true,
00:14:33.110 "data_offset": 2048,
00:14:33.110 "data_size": 63488
00:14:33.110 },
00:14:33.110 {
00:14:33.110 "name": "pt2",
00:14:33.110 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:33.110 "is_configured": true,
00:14:33.110 "data_offset": 2048,
00:14:33.110 "data_size": 63488
00:14:33.110 }
00:14:33.110 ]
00:14:33.110 }
00:14:33.110 }
00:14:33.110 }'
00:14:33.110 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:33.110 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:14:33.110 pt2'
00:14:33.110 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:33.110 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:14:33.110 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:33.370 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:33.370 "name": "pt1",
00:14:33.370 "aliases": [
00:14:33.370 "00000000-0000-0000-0000-000000000001"
00:14:33.370 ],
00:14:33.370 "product_name": "passthru",
00:14:33.370 "block_size": 512,
00:14:33.370 "num_blocks": 65536,
00:14:33.370 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:33.370 "assigned_rate_limits": {
00:14:33.370 "rw_ios_per_sec": 0,
00:14:33.370 "rw_mbytes_per_sec": 0,
00:14:33.370 "r_mbytes_per_sec": 0,
00:14:33.370 "w_mbytes_per_sec": 0
00:14:33.370 },
00:14:33.370 "claimed": true,
00:14:33.370 "claim_type": "exclusive_write",
00:14:33.370 "zoned": false,
00:14:33.370 "supported_io_types": {
00:14:33.370 "read": true,
00:14:33.370 "write": true,
00:14:33.370 "unmap": true,
00:14:33.370 "write_zeroes": true,
00:14:33.370 "flush": true,
00:14:33.370 "reset": true,
00:14:33.370 "compare": false,
00:14:33.370 "compare_and_write": false,
00:14:33.370 "abort": true,
00:14:33.370 "nvme_admin": false,
00:14:33.370 "nvme_io": false
00:14:33.370 },
00:14:33.370 "memory_domains": [
00:14:33.370 {
00:14:33.370 "dma_device_id": "system",
00:14:33.370 "dma_device_type": 1
00:14:33.370 },
00:14:33.370 {
00:14:33.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:33.370 "dma_device_type": 2
00:14:33.370 }
00:14:33.370 ],
00:14:33.370 "driver_specific": {
00:14:33.370 "passthru": {
00:14:33.370 "name": "pt1",
00:14:33.370 "base_bdev_name": "malloc1"
00:14:33.370 }
00:14:33.370 }
00:14:33.370 }'
00:14:33.370 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:33.370 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:33.370 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:33.370 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:33.370 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:33.370 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:14:33.629 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:33.888 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:33.888 "name": "pt2",
00:14:33.888 "aliases": [
00:14:33.888 "00000000-0000-0000-0000-000000000002"
00:14:33.888 ],
00:14:33.888 "product_name": "passthru",
00:14:33.888 "block_size": 512,
00:14:33.888 "num_blocks": 65536,
00:14:33.888 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:33.888 "assigned_rate_limits": {
00:14:33.889 "rw_ios_per_sec": 0,
00:14:33.889 "rw_mbytes_per_sec": 0,
00:14:33.889 "r_mbytes_per_sec": 0,
00:14:33.889 "w_mbytes_per_sec": 0
00:14:33.889 },
00:14:33.889 "claimed": true,
00:14:33.889 "claim_type": "exclusive_write",
00:14:33.889 "zoned": false,
00:14:33.889 "supported_io_types": {
00:14:33.889 "read": true,
00:14:33.889 "write": true,
00:14:33.889 "unmap": true,
00:14:33.889 "write_zeroes": true,
00:14:33.889 "flush": true,
00:14:33.889 "reset": true,
00:14:33.889 "compare": false,
00:14:33.889 "compare_and_write": false,
00:14:33.889 "abort": true,
00:14:33.889 "nvme_admin": false,
00:14:33.889 "nvme_io": false
00:14:33.889 },
00:14:33.889 "memory_domains": [
00:14:33.889 {
00:14:33.889 "dma_device_id": "system",
00:14:33.889 "dma_device_type": 1
00:14:33.889 },
00:14:33.889 {
00:14:33.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:33.889 "dma_device_type": 2
00:14:33.889 }
00:14:33.889 ],
00:14:33.889 "driver_specific": {
00:14:33.889 "passthru": {
00:14:33.889 "name": "pt2",
00:14:33.889 "base_bdev_name": "malloc2"
00:14:33.889 }
00:14:33.889 }
00:14:33.889 }'
00:14:33.889 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:33.889 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:33.889 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:33.889 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:34.148 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid'
00:14:34.408 [2024-06-10 18:58:49.066775] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:34.408 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' cccb02af-0e92-45ea-a152-6d55d25ab4d6 '!=' cccb02af-0e92-45ea-a152-6d55d25ab4d6 ']'
00:14:34.408 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1
00:14:34.408 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:14:34.408 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0
00:14:34.408 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:34.667 [2024-06-10 18:58:49.299215] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:14:34.667 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:34.668 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:34.927 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:34.927 "name": "raid_bdev1",
00:14:34.927 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:34.927 "strip_size_kb": 0,
00:14:34.927 "state": "online",
00:14:34.927 "raid_level": "raid1",
00:14:34.927 "superblock": true,
00:14:34.927 "num_base_bdevs": 2,
00:14:34.927 "num_base_bdevs_discovered": 1,
00:14:34.927 "num_base_bdevs_operational": 1,
00:14:34.927 "base_bdevs_list": [
00:14:34.927 {
00:14:34.927 "name": null,
00:14:34.927 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:34.927 "is_configured": false,
00:14:34.927 "data_offset": 2048,
00:14:34.927 "data_size": 63488
00:14:34.927 },
00:14:34.927 {
00:14:34.927 "name": "pt2",
00:14:34.927 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:34.927 "is_configured": true,
00:14:34.927 "data_offset": 2048,
00:14:34.927 "data_size": 63488
00:14:34.927 }
00:14:34.927 ]
00:14:34.927 }'
00:14:34.927 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:34.927 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:35.494 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:35.753 [2024-06-10 18:58:50.329946] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:35.753 [2024-06-10 18:58:50.329970] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:35.753 [2024-06-10 18:58:50.330015] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:35.753 [2024-06-10 18:58:50.330053] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:35.753 [2024-06-10 18:58:50.330064] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b17b90 name raid_bdev1, state offline
00:14:35.753 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]'
00:14:35.753 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:36.011 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev=
00:14:36.011 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']'
00:14:36.011 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 ))
00:14:36.011 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs ))
00:14:36.011 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ ))
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs ))
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 ))
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 ))
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:36.270 [2024-06-10 18:58:50.975610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:36.270 [2024-06-10 18:58:50.975649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:36.270 [2024-06-10 18:58:50.975665] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197f760
00:14:36.270 [2024-06-10 18:58:50.975676] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:36.270 [2024-06-10 18:58:50.977165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:36.270 [2024-06-10 18:58:50.977192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:36.270 [2024-06-10 18:58:50.977248] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:36.270 [2024-06-10 18:58:50.977270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:36.270 [2024-06-10 18:58:50.977342] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19754c0
00:14:36.270 [2024-06-10 18:58:50.977352] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:36.270 [2024-06-10 18:58:50.977504] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1975d90
00:14:36.270 [2024-06-10 18:58:50.977619] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19754c0
00:14:36.270 [2024-06-10 18:58:50.977628] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19754c0
00:14:36.270 [2024-06-10 18:58:50.977717] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:36.270 pt2
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:36.270 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:36.270 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:36.270 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.529 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:36.529 "name": "raid_bdev1",
00:14:36.529 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:36.529 "strip_size_kb": 0,
00:14:36.529 "state": "online",
00:14:36.529 "raid_level": "raid1",
00:14:36.529 "superblock": true,
00:14:36.529 "num_base_bdevs": 2,
00:14:36.529 "num_base_bdevs_discovered": 1,
00:14:36.529 "num_base_bdevs_operational": 1,
00:14:36.529 "base_bdevs_list": [
00:14:36.529 {
00:14:36.529 "name": null,
00:14:36.529 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:36.529 "is_configured": false,
00:14:36.529 "data_offset": 2048,
00:14:36.529 "data_size": 63488
00:14:36.529 },
00:14:36.529 {
00:14:36.529 "name": "pt2",
00:14:36.529 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:36.529 "is_configured": true,
00:14:36.529 "data_offset": 2048,
00:14:36.529 "data_size": 63488
00:14:36.529 }
00:14:36.529 ]
00:14:36.529 }'
00:14:36.529 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:36.529 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:37.097 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:37.357 [2024-06-10 18:58:52.002302] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:37.357 [2024-06-10 18:58:52.002326] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:37.357 [2024-06-10 18:58:52.002372] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:37.357 [2024-06-10 18:58:52.002408] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:37.357 [2024-06-10 18:58:52.002418] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19754c0 name raid_bdev1, state offline
00:14:37.357 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:37.357 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]'
00:14:37.616 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev=
00:14:37.616 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']'
00:14:37.616 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']'
00:14:37.616 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:37.875 [2024-06-10 18:58:52.459485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:37.875 [2024-06-10 18:58:52.459522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:37.875 [2024-06-10 18:58:52.459538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b16540
00:14:37.875 [2024-06-10 18:58:52.459550] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:37.875 [2024-06-10 18:58:52.461032] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:37.875 [2024-06-10 18:58:52.461058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:37.875 [2024-06-10 18:58:52.461115] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:37.875 [2024-06-10 18:58:52.461138] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:37.875 [2024-06-10 18:58:52.461227] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:14:37.875 [2024-06-10 18:58:52.461239] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:37.875 [2024-06-10 18:58:52.461251] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x197a070 name raid_bdev1, state configuring
00:14:37.875 [2024-06-10 18:58:52.461272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:37.875 [2024-06-10 18:58:52.461320] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1976a50
00:14:37.875 [2024-06-10 18:58:52.461330] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:37.875 [2024-06-10 18:58:52.461480] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b225b0
00:14:37.875 [2024-06-10 18:58:52.461597] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1976a50
00:14:37.875 [2024-06-10 18:58:52.461607] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1976a50
00:14:37.875 [2024-06-10 18:58:52.461700] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:37.875 pt1
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']'
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:37.875 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:38.135 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:38.135 "name": "raid_bdev1",
00:14:38.135 "uuid": "cccb02af-0e92-45ea-a152-6d55d25ab4d6",
00:14:38.135 "strip_size_kb": 0,
00:14:38.135 "state": "online",
00:14:38.135 "raid_level": "raid1",
00:14:38.135 "superblock": true,
00:14:38.135 "num_base_bdevs": 2,
00:14:38.135 "num_base_bdevs_discovered": 1,
00:14:38.135 "num_base_bdevs_operational": 1,
00:14:38.135 "base_bdevs_list": [
00:14:38.135 {
00:14:38.135 "name": null,
00:14:38.135 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:38.135 "is_configured": false,
00:14:38.135 "data_offset": 2048,
00:14:38.135 "data_size": 63488
00:14:38.135 },
00:14:38.135 {
00:14:38.135 "name": "pt2",
00:14:38.135 "uuid":
"00000000-0000-0000-0000-000000000002", 00:14:38.135 "is_configured": true, 00:14:38.135 "data_offset": 2048, 00:14:38.135 "data_size": 63488 00:14:38.135 } 00:14:38.135 ] 00:14:38.135 }' 00:14:38.135 18:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.135 18:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.703 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:38.703 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:38.963 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:38.963 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:38.963 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:38.963 [2024-06-10 18:58:53.718966] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' cccb02af-0e92-45ea-a152-6d55d25ab4d6 '!=' cccb02af-0e92-45ea-a152-6d55d25ab4d6 ']' 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1637749 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1637749 ']' 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1637749 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1637749 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1637749' 00:14:39.223 killing process with pid 1637749 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1637749 00:14:39.223 [2024-06-10 18:58:53.792054] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.223 [2024-06-10 18:58:53.792102] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.223 [2024-06-10 18:58:53.792139] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.223 [2024-06-10 18:58:53.792150] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1976a50 name raid_bdev1, state offline 00:14:39.223 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1637749 00:14:39.223 [2024-06-10 18:58:53.808020] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.483 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:39.483 00:14:39.483 real 0m14.617s 00:14:39.483 user 0m26.475s 00:14:39.483 sys 0m2.749s 00:14:39.483 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:39.483 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.483 ************************************ 00:14:39.483 END TEST raid_superblock_test 00:14:39.483 ************************************ 00:14:39.483 18:58:54 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:14:39.483 18:58:54 
bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:14:39.483 18:58:54 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:39.483 18:58:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.483 ************************************ 00:14:39.483 START TEST raid_read_error_test 00:14:39.483 ************************************ 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 2 read 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:39.483 18:58:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IpQzJVwa1V 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1640629 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1640629 /var/tmp/spdk-raid.sock 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1640629 ']' 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:39.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:39.483 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.483 [2024-06-10 18:58:54.160490] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:14:39.483 [2024-06-10 18:58:54.160550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640629 ] 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.0 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.1 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.2 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.3 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.4 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.5 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.6 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:01.7 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.0 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.1 cannot 
be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.2 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.3 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.4 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.5 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.6 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b6:02.7 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.0 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.1 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.2 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.3 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.4 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.5 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.6 cannot be used 00:14:39.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.483 EAL: Requested device 0000:b8:01.7 cannot be used 00:14:39.484 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.0 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.1 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.2 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.3 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.4 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.5 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.6 cannot be used 00:14:39.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:39.484 EAL: Requested device 0000:b8:02.7 cannot be used 00:14:39.744 [2024-06-10 18:58:54.293420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.744 [2024-06-10 18:58:54.381032] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.744 [2024-06-10 18:58:54.440810] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.744 [2024-06-10 18:58:54.440846] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.681 18:58:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:40.681 18:58:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:14:40.681 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:40.682 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:40.682 BaseBdev1_malloc 00:14:40.682 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:40.941 true 00:14:40.941 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:41.200 [2024-06-10 18:58:55.747377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:41.200 [2024-06-10 18:58:55.747416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.200 [2024-06-10 18:58:55.747434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a9ed50 00:14:41.200 [2024-06-10 18:58:55.747446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.200 [2024-06-10 18:58:55.749031] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.200 [2024-06-10 18:58:55.749057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:41.200 BaseBdev1 00:14:41.200 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:41.200 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:41.460 BaseBdev2_malloc 00:14:41.460 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:41.460 true 00:14:41.719 18:58:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:41.719 [2024-06-10 18:58:56.425533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:41.719 [2024-06-10 18:58:56.425571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.719 [2024-06-10 18:58:56.425594] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aa42e0 00:14:41.719 [2024-06-10 18:58:56.425606] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.719 [2024-06-10 18:58:56.426964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.719 [2024-06-10 18:58:56.426990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:41.719 BaseBdev2 00:14:41.719 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:41.979 [2024-06-10 18:58:56.654152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.979 [2024-06-10 18:58:56.655309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.979 [2024-06-10 18:58:56.655484] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1aa5640 00:14:41.979 [2024-06-10 18:58:56.655497] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:41.979 [2024-06-10 18:58:56.655680] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f9d90 00:14:41.979 [2024-06-10 18:58:56.655817] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1aa5640 00:14:41.979 [2024-06-10 18:58:56.655826] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1aa5640 00:14:41.979 [2024-06-10 18:58:56.655918] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.979 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.238 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.238 "name": "raid_bdev1", 00:14:42.238 "uuid": "5cf75580-c629-4a79-8a3b-628e13b1a0d4", 00:14:42.238 "strip_size_kb": 0, 00:14:42.238 "state": "online", 00:14:42.238 "raid_level": "raid1", 00:14:42.238 "superblock": true, 00:14:42.238 
"num_base_bdevs": 2, 00:14:42.238 "num_base_bdevs_discovered": 2, 00:14:42.238 "num_base_bdevs_operational": 2, 00:14:42.238 "base_bdevs_list": [ 00:14:42.238 { 00:14:42.238 "name": "BaseBdev1", 00:14:42.238 "uuid": "3d6a3d8a-c7eb-5d30-884c-edc79dd5ca4e", 00:14:42.238 "is_configured": true, 00:14:42.238 "data_offset": 2048, 00:14:42.238 "data_size": 63488 00:14:42.238 }, 00:14:42.238 { 00:14:42.238 "name": "BaseBdev2", 00:14:42.238 "uuid": "3defded1-1e48-5cd1-8595-d15144317140", 00:14:42.238 "is_configured": true, 00:14:42.238 "data_offset": 2048, 00:14:42.238 "data_size": 63488 00:14:42.238 } 00:14:42.238 ] 00:14:42.238 }' 00:14:42.238 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.238 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.806 18:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:42.807 18:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:43.065 [2024-06-10 18:58:57.580815] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aa0890 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.005 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.266 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.266 "name": "raid_bdev1", 00:14:44.266 "uuid": "5cf75580-c629-4a79-8a3b-628e13b1a0d4", 00:14:44.266 "strip_size_kb": 0, 00:14:44.266 "state": "online", 00:14:44.266 "raid_level": "raid1", 00:14:44.266 "superblock": true, 00:14:44.266 "num_base_bdevs": 2, 00:14:44.266 "num_base_bdevs_discovered": 2, 00:14:44.266 "num_base_bdevs_operational": 2, 00:14:44.266 "base_bdevs_list": [ 00:14:44.266 { 00:14:44.266 "name": "BaseBdev1", 00:14:44.266 "uuid": "3d6a3d8a-c7eb-5d30-884c-edc79dd5ca4e", 00:14:44.266 "is_configured": true, 00:14:44.266 
"data_offset": 2048, 00:14:44.266 "data_size": 63488 00:14:44.266 }, 00:14:44.266 { 00:14:44.266 "name": "BaseBdev2", 00:14:44.266 "uuid": "3defded1-1e48-5cd1-8595-d15144317140", 00:14:44.266 "is_configured": true, 00:14:44.266 "data_offset": 2048, 00:14:44.266 "data_size": 63488 00:14:44.266 } 00:14:44.266 ] 00:14:44.266 }' 00:14:44.266 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.266 18:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.862 18:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:45.145 [2024-06-10 18:58:59.721119] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.145 [2024-06-10 18:58:59.721160] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.145 [2024-06-10 18:58:59.724061] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.145 [2024-06-10 18:58:59.724088] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.146 [2024-06-10 18:58:59.724152] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.146 [2024-06-10 18:58:59.724162] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1aa5640 name raid_bdev1, state offline 00:14:45.146 0 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1640629 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1640629 ']' 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1640629 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1640629 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1640629' 00:14:45.146 killing process with pid 1640629 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1640629 00:14:45.146 [2024-06-10 18:58:59.797110] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.146 18:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1640629 00:14:45.146 [2024-06-10 18:58:59.806862] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IpQzJVwa1V 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:45.406 00:14:45.406 real 0m5.927s 00:14:45.406 user 0m9.209s 00:14:45.406 sys 0m1.040s 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 
-- # xtrace_disable 00:14:45.406 18:59:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.406 ************************************ 00:14:45.406 END TEST raid_read_error_test 00:14:45.406 ************************************ 00:14:45.406 18:59:00 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:14:45.406 18:59:00 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:14:45.406 18:59:00 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:45.406 18:59:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.406 ************************************ 00:14:45.406 START TEST raid_write_error_test 00:14:45.406 ************************************ 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 2 write 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qag6IwJWtS 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1641701 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1641701 /var/tmp/spdk-raid.sock 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1641701 ']' 00:14:45.406 18:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:45.407 18:59:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:45.407 18:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:45.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:45.407 18:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:45.407 18:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.407 [2024-06-10 18:59:00.151791] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:14:45.407 [2024-06-10 18:59:00.151834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641701 ] 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.0 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.1 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.2 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.3 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.4 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.5 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b6:01.6 cannot be used 00:14:45.666 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:01.5 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:01.6 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:01.7 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.0 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.1 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.2 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.3 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.4 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.5 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.6 cannot be used 00:14:45.666 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.666 EAL: Requested device 0000:b8:02.7 cannot be used 00:14:45.666 [2024-06-10 18:59:00.262571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.666 [2024-06-10 18:59:00.345426] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.666 [2024-06-10 18:59:00.413023] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.666 [2024-06-10 18:59:00.413060] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.602 18:59:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:46.602 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:14:46.603 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:46.603 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:46.603 BaseBdev1_malloc 00:14:46.603 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:46.862 true 00:14:46.862 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:47.121 [2024-06-10 18:59:01.738173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:47.121 [2024-06-10 18:59:01.738214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.121 [2024-06-10 18:59:01.738231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x247fd50 00:14:47.121 [2024-06-10 18:59:01.738243] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.121 [2024-06-10 18:59:01.739762] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.121 [2024-06-10 18:59:01.739791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.121 BaseBdev1 00:14:47.121 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:47.121 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:47.381 BaseBdev2_malloc 00:14:47.381 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:47.640 true 00:14:47.640 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:47.900 [2024-06-10 18:59:02.416163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:47.900 [2024-06-10 18:59:02.416208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.900 [2024-06-10 18:59:02.416224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24852e0 00:14:47.900 [2024-06-10 18:59:02.416236] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.900 [2024-06-10 18:59:02.417501] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.900 [2024-06-10 18:59:02.417528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:47.900 BaseBdev2 00:14:47.900 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:47.900 [2024-06-10 18:59:02.640771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.900 [2024-06-10 18:59:02.641832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.900 [2024-06-10 18:59:02.642005] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2486640 
00:14:47.900 [2024-06-10 18:59:02.642017] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.900 [2024-06-10 18:59:02.642174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22dad90 00:14:47.900 [2024-06-10 18:59:02.642305] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2486640 00:14:47.900 [2024-06-10 18:59:02.642315] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2486640 00:14:47.900 [2024-06-10 18:59:02.642403] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:48.159 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.160 "name": "raid_bdev1", 00:14:48.160 "uuid": "722d9779-75e7-44fa-bc3f-15a1920e0957", 00:14:48.160 "strip_size_kb": 0, 00:14:48.160 "state": "online", 00:14:48.160 "raid_level": "raid1", 00:14:48.160 "superblock": true, 00:14:48.160 "num_base_bdevs": 2, 00:14:48.160 "num_base_bdevs_discovered": 2, 00:14:48.160 "num_base_bdevs_operational": 2, 00:14:48.160 "base_bdevs_list": [ 00:14:48.160 { 00:14:48.160 "name": "BaseBdev1", 00:14:48.160 "uuid": "75aa9d44-048e-5fed-b4b3-3081ec28a1f0", 00:14:48.160 "is_configured": true, 00:14:48.160 "data_offset": 2048, 00:14:48.160 "data_size": 63488 00:14:48.160 }, 00:14:48.160 { 00:14:48.160 "name": "BaseBdev2", 00:14:48.160 "uuid": "ae729524-5b82-5082-85c2-a1f99d17f838", 00:14:48.160 "is_configured": true, 00:14:48.160 "data_offset": 2048, 00:14:48.160 "data_size": 63488 00:14:48.160 } 00:14:48.160 ] 00:14:48.160 }' 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.160 18:59:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.728 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:48.728 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:48.987 [2024-06-10 18:59:03.563438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2481890 00:14:49.925 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:49.925 [2024-06-10 18:59:04.681866] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: 
Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:49.925 [2024-06-10 18:59:04.681919] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.925 [2024-06-10 18:59:04.682090] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2481890 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.185 "name": "raid_bdev1", 00:14:50.185 "uuid": "722d9779-75e7-44fa-bc3f-15a1920e0957", 00:14:50.185 "strip_size_kb": 0, 00:14:50.185 "state": "online", 00:14:50.185 "raid_level": "raid1", 00:14:50.185 "superblock": true, 00:14:50.185 "num_base_bdevs": 2, 00:14:50.185 "num_base_bdevs_discovered": 1, 00:14:50.185 "num_base_bdevs_operational": 1, 00:14:50.185 "base_bdevs_list": [ 00:14:50.185 { 00:14:50.185 "name": null, 00:14:50.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.185 "is_configured": false, 00:14:50.185 "data_offset": 2048, 00:14:50.185 "data_size": 63488 00:14:50.185 }, 00:14:50.185 { 00:14:50.185 "name": "BaseBdev2", 00:14:50.185 "uuid": "ae729524-5b82-5082-85c2-a1f99d17f838", 00:14:50.185 "is_configured": true, 00:14:50.185 "data_offset": 2048, 00:14:50.185 "data_size": 63488 00:14:50.185 } 00:14:50.185 ] 00:14:50.185 }' 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.185 18:59:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.775 18:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:51.035 [2024-06-10 18:59:05.728618] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.035 [2024-06-10 18:59:05.728652] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.035 [2024-06-10 18:59:05.731522] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.035 [2024-06-10 
18:59:05.731550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.035 [2024-06-10 18:59:05.731601] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.035 [2024-06-10 18:59:05.731611] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2486640 name raid_bdev1, state offline 00:14:51.035 0 00:14:51.035 18:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1641701 00:14:51.035 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1641701 ']' 00:14:51.035 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1641701 00:14:51.035 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:14:51.035 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:51.035 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1641701 00:14:51.297 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:51.297 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:51.297 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1641701' 00:14:51.297 killing process with pid 1641701 00:14:51.297 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1641701 00:14:51.297 [2024-06-10 18:59:05.804221] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.297 18:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1641701 00:14:51.297 [2024-06-10 18:59:05.812948] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.qag6IwJWtS 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:51.297 00:14:51.297 real 0m5.921s 00:14:51.297 user 0m9.247s 00:14:51.297 sys 0m0.995s 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:51.297 18:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.297 ************************************ 00:14:51.297 END TEST raid_write_error_test 00:14:51.297 ************************************ 00:14:51.557 18:59:06 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:51.557 18:59:06 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:51.557 18:59:06 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:51.557 18:59:06 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:14:51.557 18:59:06 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:51.557 18:59:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.557 ************************************ 00:14:51.557 START TEST raid_state_function_test 00:14:51.557 ************************************ 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 
3 false 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:51.557 18:59:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1642822 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1642822' 00:14:51.557 Process raid pid: 1642822 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1642822 /var/tmp/spdk-raid.sock 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1642822 ']' 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:51.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:51.557 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.557 [2024-06-10 18:59:06.153594] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:14:51.557 [2024-06-10 18:59:06.153649] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.0 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.1 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.2 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.3 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.4 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.5 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.6 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.557 EAL: Requested device 0000:b6:01.7 cannot be used 00:14:51.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:14:51.557 EAL:
Requested device 0000:b8:01.6 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:01.7 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.0 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.1 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.2 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.3 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.4 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.5 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.6 cannot be used 00:14:51.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:51.558 EAL: Requested device 0000:b8:02.7 cannot be used 00:14:51.558 [2024-06-10 18:59:06.290051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.817 [2024-06-10 18:59:06.377915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.817 [2024-06-10 18:59:06.444261] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.817 [2024-06-10 18:59:06.444293] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.386 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:52.386 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:14:52.386 18:59:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:52.386 [2024-06-10 18:59:07.111470] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.386 [2024-06-10 18:59:07.111508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.386 [2024-06-10 18:59:07.111518] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.386 [2024-06-10 18:59:07.111534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.386 [2024-06-10 18:59:07.111542] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.386 [2024-06-10 18:59:07.111552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.386 18:59:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.386 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.645 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.645 "name": "Existed_Raid", 00:14:52.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.645 "strip_size_kb": 64, 00:14:52.645 "state": "configuring", 00:14:52.645 "raid_level": "raid0", 00:14:52.645 "superblock": false, 00:14:52.645 "num_base_bdevs": 3, 00:14:52.645 "num_base_bdevs_discovered": 0, 00:14:52.645 "num_base_bdevs_operational": 3, 00:14:52.645 "base_bdevs_list": [ 00:14:52.645 { 00:14:52.645 "name": "BaseBdev1", 00:14:52.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.645 "is_configured": false, 00:14:52.645 "data_offset": 0, 00:14:52.645 "data_size": 0 00:14:52.645 }, 00:14:52.645 { 00:14:52.645 "name": "BaseBdev2", 00:14:52.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.645 "is_configured": false, 00:14:52.645 "data_offset": 0, 00:14:52.645 "data_size": 0 00:14:52.645 }, 00:14:52.645 { 00:14:52.645 "name": "BaseBdev3", 00:14:52.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.645 "is_configured": false, 00:14:52.645 "data_offset": 0, 00:14:52.645 "data_size": 0 00:14:52.645 } 00:14:52.645 ] 00:14:52.645 }' 00:14:52.645 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.645 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.213 18:59:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:53.472 [2024-06-10 18:59:08.037790] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.472 [2024-06-10 18:59:08.037814] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15c4f30 name Existed_Raid, state configuring 00:14:53.472 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:53.731 [2024-06-10 18:59:08.258389] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.731 [2024-06-10 18:59:08.258420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.731 [2024-06-10 18:59:08.258429] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.731 [2024-06-10 18:59:08.258440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.731 [2024-06-10 18:59:08.258448] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:53.731 [2024-06-10 18:59:08.258459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:53.731 [2024-06-10 18:59:08.428341] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.731 BaseBdev1 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:53.731 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.990 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.249 [ 00:14:54.249 { 00:14:54.249 "name": "BaseBdev1", 00:14:54.249 "aliases": [ 00:14:54.249 "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6" 00:14:54.249 ], 00:14:54.249 "product_name": "Malloc disk", 00:14:54.249 "block_size": 512, 00:14:54.249 "num_blocks": 65536, 00:14:54.249 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:54.249 "assigned_rate_limits": { 00:14:54.249 "rw_ios_per_sec": 0, 00:14:54.249 "rw_mbytes_per_sec": 0, 00:14:54.249 "r_mbytes_per_sec": 0, 00:14:54.249 "w_mbytes_per_sec": 0 00:14:54.249 }, 00:14:54.249 "claimed": true, 00:14:54.249 "claim_type": "exclusive_write", 00:14:54.249 "zoned": false, 00:14:54.249 "supported_io_types": { 00:14:54.249 "read": true, 00:14:54.249 "write": true, 00:14:54.249 "unmap": true, 00:14:54.249 "write_zeroes": true, 00:14:54.249 "flush": true, 00:14:54.249 "reset": true, 00:14:54.249 "compare": false, 00:14:54.249 "compare_and_write": false, 00:14:54.249 "abort": true, 00:14:54.249 "nvme_admin": false, 00:14:54.249 "nvme_io": false 00:14:54.249 }, 00:14:54.249 "memory_domains": [ 00:14:54.249 { 
00:14:54.249 "dma_device_id": "system", 00:14:54.249 "dma_device_type": 1 00:14:54.249 }, 00:14:54.249 { 00:14:54.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.249 "dma_device_type": 2 00:14:54.249 } 00:14:54.249 ], 00:14:54.249 "driver_specific": {} 00:14:54.249 } 00:14:54.249 ] 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.249 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.508 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:14:54.508 "name": "Existed_Raid", 00:14:54.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.508 "strip_size_kb": 64, 00:14:54.508 "state": "configuring", 00:14:54.508 "raid_level": "raid0", 00:14:54.508 "superblock": false, 00:14:54.508 "num_base_bdevs": 3, 00:14:54.508 "num_base_bdevs_discovered": 1, 00:14:54.508 "num_base_bdevs_operational": 3, 00:14:54.508 "base_bdevs_list": [ 00:14:54.508 { 00:14:54.508 "name": "BaseBdev1", 00:14:54.508 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:54.508 "is_configured": true, 00:14:54.508 "data_offset": 0, 00:14:54.508 "data_size": 65536 00:14:54.508 }, 00:14:54.508 { 00:14:54.508 "name": "BaseBdev2", 00:14:54.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.508 "is_configured": false, 00:14:54.508 "data_offset": 0, 00:14:54.508 "data_size": 0 00:14:54.508 }, 00:14:54.508 { 00:14:54.508 "name": "BaseBdev3", 00:14:54.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.508 "is_configured": false, 00:14:54.508 "data_offset": 0, 00:14:54.508 "data_size": 0 00:14:54.508 } 00:14:54.508 ] 00:14:54.508 }' 00:14:54.508 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.508 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.075 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:55.075 [2024-06-10 18:59:09.779785] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.075 [2024-06-10 18:59:09.779820] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15c4800 name Existed_Raid, state configuring 00:14:55.075 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:55.334 [2024-06-10 18:59:10.004404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.334 [2024-06-10 18:59:10.005800] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.334 [2024-06-10 18:59:10.005831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.334 [2024-06-10 18:59:10.005841] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.334 [2024-06-10 18:59:10.005852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.334 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.334 18:59:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.335 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.335 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.593 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.593 "name": "Existed_Raid", 00:14:55.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.594 "strip_size_kb": 64, 00:14:55.594 "state": "configuring", 00:14:55.594 "raid_level": "raid0", 00:14:55.594 "superblock": false, 00:14:55.594 "num_base_bdevs": 3, 00:14:55.594 "num_base_bdevs_discovered": 1, 00:14:55.594 "num_base_bdevs_operational": 3, 00:14:55.594 "base_bdevs_list": [ 00:14:55.594 { 00:14:55.594 "name": "BaseBdev1", 00:14:55.594 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:55.594 "is_configured": true, 00:14:55.594 "data_offset": 0, 00:14:55.594 "data_size": 65536 00:14:55.594 }, 00:14:55.594 { 00:14:55.594 "name": "BaseBdev2", 00:14:55.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.594 "is_configured": false, 00:14:55.594 "data_offset": 0, 00:14:55.594 "data_size": 0 00:14:55.594 }, 00:14:55.594 { 00:14:55.594 "name": "BaseBdev3", 00:14:55.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.594 "is_configured": false, 00:14:55.594 "data_offset": 0, 00:14:55.594 "data_size": 0 00:14:55.594 } 00:14:55.594 ] 00:14:55.594 }' 00:14:55.594 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.594 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.162 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.421 [2024-06-10 18:59:11.022279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.421 BaseBdev2 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:56.421 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.680 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.940 [ 00:14:56.940 { 00:14:56.940 "name": "BaseBdev2", 00:14:56.940 "aliases": [ 00:14:56.940 "cf368660-a051-415a-b35d-5de5115ccf37" 00:14:56.940 ], 00:14:56.940 "product_name": "Malloc disk", 00:14:56.940 "block_size": 512, 00:14:56.940 "num_blocks": 65536, 00:14:56.940 "uuid": "cf368660-a051-415a-b35d-5de5115ccf37", 00:14:56.940 "assigned_rate_limits": { 00:14:56.940 "rw_ios_per_sec": 0, 00:14:56.940 "rw_mbytes_per_sec": 0, 00:14:56.940 "r_mbytes_per_sec": 0, 00:14:56.940 "w_mbytes_per_sec": 0 00:14:56.940 }, 00:14:56.940 "claimed": true, 00:14:56.940 "claim_type": "exclusive_write", 00:14:56.940 "zoned": false, 00:14:56.940 "supported_io_types": { 00:14:56.940 "read": true, 00:14:56.940 "write": 
true, 00:14:56.940 "unmap": true, 00:14:56.940 "write_zeroes": true, 00:14:56.940 "flush": true, 00:14:56.940 "reset": true, 00:14:56.940 "compare": false, 00:14:56.940 "compare_and_write": false, 00:14:56.940 "abort": true, 00:14:56.940 "nvme_admin": false, 00:14:56.940 "nvme_io": false 00:14:56.940 }, 00:14:56.940 "memory_domains": [ 00:14:56.940 { 00:14:56.940 "dma_device_id": "system", 00:14:56.940 "dma_device_type": 1 00:14:56.940 }, 00:14:56.940 { 00:14:56.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.940 "dma_device_type": 2 00:14:56.940 } 00:14:56.940 ], 00:14:56.940 "driver_specific": {} 00:14:56.940 } 00:14:56.940 ] 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.940 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.200 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.200 "name": "Existed_Raid", 00:14:57.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.200 "strip_size_kb": 64, 00:14:57.200 "state": "configuring", 00:14:57.200 "raid_level": "raid0", 00:14:57.200 "superblock": false, 00:14:57.200 "num_base_bdevs": 3, 00:14:57.200 "num_base_bdevs_discovered": 2, 00:14:57.200 "num_base_bdevs_operational": 3, 00:14:57.200 "base_bdevs_list": [ 00:14:57.200 { 00:14:57.200 "name": "BaseBdev1", 00:14:57.200 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:57.200 "is_configured": true, 00:14:57.200 "data_offset": 0, 00:14:57.200 "data_size": 65536 00:14:57.200 }, 00:14:57.200 { 00:14:57.200 "name": "BaseBdev2", 00:14:57.200 "uuid": "cf368660-a051-415a-b35d-5de5115ccf37", 00:14:57.200 "is_configured": true, 00:14:57.200 "data_offset": 0, 00:14:57.200 "data_size": 65536 00:14:57.200 }, 00:14:57.200 { 00:14:57.200 "name": "BaseBdev3", 00:14:57.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.200 "is_configured": false, 00:14:57.200 "data_offset": 0, 00:14:57.200 "data_size": 0 00:14:57.200 } 00:14:57.200 ] 00:14:57.200 }' 00:14:57.200 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.200 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.768 [2024-06-10 18:59:12.497400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.768 [2024-06-10 18:59:12.497432] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15c56f0 00:14:57.768 [2024-06-10 18:59:12.497440] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:57.768 [2024-06-10 18:59:12.497625] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15c53c0 00:14:57.768 [2024-06-10 18:59:12.497735] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15c56f0 00:14:57.768 [2024-06-10 18:59:12.497744] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15c56f0 00:14:57.768 [2024-06-10 18:59:12.497886] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.768 BaseBdev3 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:57.768 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.027 18:59:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.286 [ 00:14:58.286 { 00:14:58.286 "name": "BaseBdev3", 00:14:58.286 "aliases": [ 00:14:58.286 "416935b3-fd26-49a6-b4e2-5801902efed3" 00:14:58.286 ], 00:14:58.286 "product_name": "Malloc disk", 00:14:58.286 "block_size": 512, 00:14:58.286 "num_blocks": 65536, 00:14:58.286 "uuid": "416935b3-fd26-49a6-b4e2-5801902efed3", 00:14:58.286 "assigned_rate_limits": { 00:14:58.286 "rw_ios_per_sec": 0, 00:14:58.286 "rw_mbytes_per_sec": 0, 00:14:58.286 "r_mbytes_per_sec": 0, 00:14:58.286 "w_mbytes_per_sec": 0 00:14:58.286 }, 00:14:58.286 "claimed": true, 00:14:58.286 "claim_type": "exclusive_write", 00:14:58.286 "zoned": false, 00:14:58.286 "supported_io_types": { 00:14:58.286 "read": true, 00:14:58.286 "write": true, 00:14:58.286 "unmap": true, 00:14:58.286 "write_zeroes": true, 00:14:58.286 "flush": true, 00:14:58.286 "reset": true, 00:14:58.286 "compare": false, 00:14:58.286 "compare_and_write": false, 00:14:58.286 "abort": true, 00:14:58.286 "nvme_admin": false, 00:14:58.286 "nvme_io": false 00:14:58.286 }, 00:14:58.286 "memory_domains": [ 00:14:58.286 { 00:14:58.286 "dma_device_id": "system", 00:14:58.286 "dma_device_type": 1 00:14:58.286 }, 00:14:58.286 { 00:14:58.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.286 "dma_device_type": 2 00:14:58.286 } 00:14:58.286 ], 00:14:58.286 "driver_specific": {} 00:14:58.286 } 00:14:58.286 ] 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:58.286 18:59:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.286 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.544 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.544 "name": "Existed_Raid", 00:14:58.544 "uuid": "1b39b829-2033-4b2d-ae7f-e38e2c1e10d7", 00:14:58.544 "strip_size_kb": 64, 00:14:58.544 "state": "online", 00:14:58.544 "raid_level": "raid0", 00:14:58.544 "superblock": false, 00:14:58.544 "num_base_bdevs": 3, 00:14:58.545 "num_base_bdevs_discovered": 3, 00:14:58.545 "num_base_bdevs_operational": 3, 00:14:58.545 "base_bdevs_list": [ 00:14:58.545 { 00:14:58.545 "name": "BaseBdev1", 00:14:58.545 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:58.545 "is_configured": true, 00:14:58.545 "data_offset": 0, 
00:14:58.545 "data_size": 65536 00:14:58.545 }, 00:14:58.545 { 00:14:58.545 "name": "BaseBdev2", 00:14:58.545 "uuid": "cf368660-a051-415a-b35d-5de5115ccf37", 00:14:58.545 "is_configured": true, 00:14:58.545 "data_offset": 0, 00:14:58.545 "data_size": 65536 00:14:58.545 }, 00:14:58.545 { 00:14:58.545 "name": "BaseBdev3", 00:14:58.545 "uuid": "416935b3-fd26-49a6-b4e2-5801902efed3", 00:14:58.545 "is_configured": true, 00:14:58.545 "data_offset": 0, 00:14:58.545 "data_size": 65536 00:14:58.545 } 00:14:58.545 ] 00:14:58.545 }' 00:14:58.545 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.545 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:59.112 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:59.370 [2024-06-10 18:59:13.985566] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.370 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:59.370 "name": "Existed_Raid", 00:14:59.370 "aliases": [ 
00:14:59.370 "1b39b829-2033-4b2d-ae7f-e38e2c1e10d7" 00:14:59.370 ], 00:14:59.370 "product_name": "Raid Volume", 00:14:59.370 "block_size": 512, 00:14:59.370 "num_blocks": 196608, 00:14:59.370 "uuid": "1b39b829-2033-4b2d-ae7f-e38e2c1e10d7", 00:14:59.370 "assigned_rate_limits": { 00:14:59.370 "rw_ios_per_sec": 0, 00:14:59.370 "rw_mbytes_per_sec": 0, 00:14:59.370 "r_mbytes_per_sec": 0, 00:14:59.370 "w_mbytes_per_sec": 0 00:14:59.370 }, 00:14:59.370 "claimed": false, 00:14:59.370 "zoned": false, 00:14:59.370 "supported_io_types": { 00:14:59.370 "read": true, 00:14:59.370 "write": true, 00:14:59.370 "unmap": true, 00:14:59.370 "write_zeroes": true, 00:14:59.370 "flush": true, 00:14:59.370 "reset": true, 00:14:59.370 "compare": false, 00:14:59.370 "compare_and_write": false, 00:14:59.370 "abort": false, 00:14:59.371 "nvme_admin": false, 00:14:59.371 "nvme_io": false 00:14:59.371 }, 00:14:59.371 "memory_domains": [ 00:14:59.371 { 00:14:59.371 "dma_device_id": "system", 00:14:59.371 "dma_device_type": 1 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.371 "dma_device_type": 2 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "dma_device_id": "system", 00:14:59.371 "dma_device_type": 1 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.371 "dma_device_type": 2 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "dma_device_id": "system", 00:14:59.371 "dma_device_type": 1 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.371 "dma_device_type": 2 00:14:59.371 } 00:14:59.371 ], 00:14:59.371 "driver_specific": { 00:14:59.371 "raid": { 00:14:59.371 "uuid": "1b39b829-2033-4b2d-ae7f-e38e2c1e10d7", 00:14:59.371 "strip_size_kb": 64, 00:14:59.371 "state": "online", 00:14:59.371 "raid_level": "raid0", 00:14:59.371 "superblock": false, 00:14:59.371 "num_base_bdevs": 3, 00:14:59.371 "num_base_bdevs_discovered": 3, 00:14:59.371 "num_base_bdevs_operational": 3, 
00:14:59.371 "base_bdevs_list": [ 00:14:59.371 { 00:14:59.371 "name": "BaseBdev1", 00:14:59.371 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:59.371 "is_configured": true, 00:14:59.371 "data_offset": 0, 00:14:59.371 "data_size": 65536 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "name": "BaseBdev2", 00:14:59.371 "uuid": "cf368660-a051-415a-b35d-5de5115ccf37", 00:14:59.371 "is_configured": true, 00:14:59.371 "data_offset": 0, 00:14:59.371 "data_size": 65536 00:14:59.371 }, 00:14:59.371 { 00:14:59.371 "name": "BaseBdev3", 00:14:59.371 "uuid": "416935b3-fd26-49a6-b4e2-5801902efed3", 00:14:59.371 "is_configured": true, 00:14:59.371 "data_offset": 0, 00:14:59.371 "data_size": 65536 00:14:59.371 } 00:14:59.371 ] 00:14:59.371 } 00:14:59.371 } 00:14:59.371 }' 00:14:59.371 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.371 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:59.371 BaseBdev2 00:14:59.371 BaseBdev3' 00:14:59.371 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:59.371 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:59.371 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:59.630 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:59.630 "name": "BaseBdev1", 00:14:59.630 "aliases": [ 00:14:59.630 "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6" 00:14:59.630 ], 00:14:59.630 "product_name": "Malloc disk", 00:14:59.630 "block_size": 512, 00:14:59.630 "num_blocks": 65536, 00:14:59.630 "uuid": "66c645b8-3f6c-4d47-95f7-9b86cd0ad2a6", 00:14:59.630 "assigned_rate_limits": { 00:14:59.630 "rw_ios_per_sec": 0, 
00:14:59.630 "rw_mbytes_per_sec": 0, 00:14:59.630 "r_mbytes_per_sec": 0, 00:14:59.630 "w_mbytes_per_sec": 0 00:14:59.630 }, 00:14:59.630 "claimed": true, 00:14:59.630 "claim_type": "exclusive_write", 00:14:59.630 "zoned": false, 00:14:59.630 "supported_io_types": { 00:14:59.630 "read": true, 00:14:59.630 "write": true, 00:14:59.630 "unmap": true, 00:14:59.630 "write_zeroes": true, 00:14:59.630 "flush": true, 00:14:59.630 "reset": true, 00:14:59.630 "compare": false, 00:14:59.630 "compare_and_write": false, 00:14:59.630 "abort": true, 00:14:59.630 "nvme_admin": false, 00:14:59.630 "nvme_io": false 00:14:59.630 }, 00:14:59.630 "memory_domains": [ 00:14:59.630 { 00:14:59.630 "dma_device_id": "system", 00:14:59.630 "dma_device_type": 1 00:14:59.630 }, 00:14:59.630 { 00:14:59.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.630 "dma_device_type": 2 00:14:59.630 } 00:14:59.630 ], 00:14:59.630 "driver_specific": {} 00:14:59.630 }' 00:14:59.630 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:59.630 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:59.630 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:59.630 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:59.889 18:59:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:59.889 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.148 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.148 "name": "BaseBdev2", 00:15:00.148 "aliases": [ 00:15:00.148 "cf368660-a051-415a-b35d-5de5115ccf37" 00:15:00.148 ], 00:15:00.148 "product_name": "Malloc disk", 00:15:00.148 "block_size": 512, 00:15:00.148 "num_blocks": 65536, 00:15:00.148 "uuid": "cf368660-a051-415a-b35d-5de5115ccf37", 00:15:00.148 "assigned_rate_limits": { 00:15:00.148 "rw_ios_per_sec": 0, 00:15:00.148 "rw_mbytes_per_sec": 0, 00:15:00.148 "r_mbytes_per_sec": 0, 00:15:00.148 "w_mbytes_per_sec": 0 00:15:00.148 }, 00:15:00.148 "claimed": true, 00:15:00.148 "claim_type": "exclusive_write", 00:15:00.148 "zoned": false, 00:15:00.148 "supported_io_types": { 00:15:00.148 "read": true, 00:15:00.148 "write": true, 00:15:00.148 "unmap": true, 00:15:00.148 "write_zeroes": true, 00:15:00.148 "flush": true, 00:15:00.148 "reset": true, 00:15:00.148 "compare": false, 00:15:00.148 "compare_and_write": false, 00:15:00.148 "abort": true, 00:15:00.148 "nvme_admin": false, 00:15:00.148 "nvme_io": false 00:15:00.148 }, 00:15:00.148 "memory_domains": [ 00:15:00.148 { 00:15:00.148 "dma_device_id": "system", 00:15:00.148 "dma_device_type": 1 00:15:00.148 }, 00:15:00.148 { 00:15:00.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.148 "dma_device_type": 2 00:15:00.148 } 00:15:00.148 ], 00:15:00.148 
"driver_specific": {} 00:15:00.148 }' 00:15:00.148 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.148 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.407 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.407 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.407 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.407 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.407 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.407 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.407 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.407 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.407 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.666 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.666 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.666 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:00.666 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.666 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.666 "name": "BaseBdev3", 00:15:00.666 "aliases": [ 00:15:00.666 "416935b3-fd26-49a6-b4e2-5801902efed3" 00:15:00.666 ], 00:15:00.666 "product_name": "Malloc disk", 00:15:00.666 "block_size": 512, 
00:15:00.666 "num_blocks": 65536, 00:15:00.666 "uuid": "416935b3-fd26-49a6-b4e2-5801902efed3", 00:15:00.666 "assigned_rate_limits": { 00:15:00.666 "rw_ios_per_sec": 0, 00:15:00.666 "rw_mbytes_per_sec": 0, 00:15:00.666 "r_mbytes_per_sec": 0, 00:15:00.666 "w_mbytes_per_sec": 0 00:15:00.666 }, 00:15:00.666 "claimed": true, 00:15:00.666 "claim_type": "exclusive_write", 00:15:00.666 "zoned": false, 00:15:00.666 "supported_io_types": { 00:15:00.666 "read": true, 00:15:00.666 "write": true, 00:15:00.666 "unmap": true, 00:15:00.666 "write_zeroes": true, 00:15:00.666 "flush": true, 00:15:00.666 "reset": true, 00:15:00.666 "compare": false, 00:15:00.666 "compare_and_write": false, 00:15:00.666 "abort": true, 00:15:00.666 "nvme_admin": false, 00:15:00.666 "nvme_io": false 00:15:00.666 }, 00:15:00.666 "memory_domains": [ 00:15:00.666 { 00:15:00.666 "dma_device_id": "system", 00:15:00.666 "dma_device_type": 1 00:15:00.666 }, 00:15:00.666 { 00:15:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.666 "dma_device_type": 2 00:15:00.666 } 00:15:00.666 ], 00:15:00.666 "driver_specific": {} 00:15:00.666 }' 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.925 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.183 18:59:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.183 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.183 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.183 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.183 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:01.442 [2024-06-10 18:59:15.978632] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.442 [2024-06-10 18:59:15.978655] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.442 [2024-06-10 18:59:15.978692] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.442 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.701 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.701 "name": "Existed_Raid", 00:15:01.701 "uuid": "1b39b829-2033-4b2d-ae7f-e38e2c1e10d7", 00:15:01.701 "strip_size_kb": 64, 00:15:01.701 "state": "offline", 00:15:01.701 "raid_level": "raid0", 00:15:01.701 "superblock": false, 00:15:01.701 "num_base_bdevs": 3, 00:15:01.701 "num_base_bdevs_discovered": 2, 00:15:01.701 "num_base_bdevs_operational": 2, 00:15:01.701 "base_bdevs_list": [ 00:15:01.701 { 00:15:01.701 "name": null, 00:15:01.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.701 "is_configured": false, 00:15:01.701 "data_offset": 0, 00:15:01.701 "data_size": 65536 00:15:01.701 }, 00:15:01.701 { 00:15:01.701 "name": "BaseBdev2", 00:15:01.701 "uuid": "cf368660-a051-415a-b35d-5de5115ccf37", 00:15:01.701 "is_configured": true, 00:15:01.701 "data_offset": 0, 00:15:01.701 "data_size": 65536 00:15:01.701 }, 00:15:01.701 { 00:15:01.701 "name": "BaseBdev3", 00:15:01.701 "uuid": "416935b3-fd26-49a6-b4e2-5801902efed3", 
00:15:01.701 "is_configured": true, 00:15:01.701 "data_offset": 0, 00:15:01.701 "data_size": 65536 00:15:01.701 } 00:15:01.701 ] 00:15:01.701 }' 00:15:01.701 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.701 18:59:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.269 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:02.269 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:02.269 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.269 18:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:02.269 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:02.269 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.269 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:02.527 [2024-06-10 18:59:17.227007] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.527 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:02.527 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:02.527 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.527 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:02.786 18:59:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:02.786 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.786 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:03.044 [2024-06-10 18:59:17.694397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:03.044 [2024-06-10 18:59:17.694430] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15c56f0 name Existed_Raid, state offline 00:15:03.044 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:03.044 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:03.044 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.044 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:03.303 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:03.303 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:03.303 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:03.303 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:03.303 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:03.303 18:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:03.593 BaseBdev2 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:03.593 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.852 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.111 [ 00:15:04.111 { 00:15:04.111 "name": "BaseBdev2", 00:15:04.111 "aliases": [ 00:15:04.111 "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c" 00:15:04.111 ], 00:15:04.111 "product_name": "Malloc disk", 00:15:04.111 "block_size": 512, 00:15:04.111 "num_blocks": 65536, 00:15:04.111 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:04.111 "assigned_rate_limits": { 00:15:04.111 "rw_ios_per_sec": 0, 00:15:04.111 "rw_mbytes_per_sec": 0, 00:15:04.111 "r_mbytes_per_sec": 0, 00:15:04.111 "w_mbytes_per_sec": 0 00:15:04.111 }, 00:15:04.111 "claimed": false, 00:15:04.111 "zoned": false, 00:15:04.111 "supported_io_types": { 00:15:04.111 "read": true, 00:15:04.111 "write": true, 00:15:04.111 "unmap": true, 00:15:04.111 "write_zeroes": true, 00:15:04.111 "flush": true, 00:15:04.111 "reset": true, 00:15:04.111 "compare": false, 00:15:04.111 "compare_and_write": false, 00:15:04.111 "abort": true, 00:15:04.111 "nvme_admin": false, 00:15:04.111 "nvme_io": false 
00:15:04.111 }, 00:15:04.111 "memory_domains": [ 00:15:04.111 { 00:15:04.111 "dma_device_id": "system", 00:15:04.111 "dma_device_type": 1 00:15:04.111 }, 00:15:04.111 { 00:15:04.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.111 "dma_device_type": 2 00:15:04.111 } 00:15:04.111 ], 00:15:04.111 "driver_specific": {} 00:15:04.112 } 00:15:04.112 ] 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:04.112 BaseBdev3 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:04.112 18:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.370 18:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 
2000 00:15:04.629 [ 00:15:04.629 { 00:15:04.629 "name": "BaseBdev3", 00:15:04.629 "aliases": [ 00:15:04.629 "e663d961-4566-4ab2-b86e-e99dd91bac7f" 00:15:04.629 ], 00:15:04.629 "product_name": "Malloc disk", 00:15:04.629 "block_size": 512, 00:15:04.629 "num_blocks": 65536, 00:15:04.629 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:04.629 "assigned_rate_limits": { 00:15:04.629 "rw_ios_per_sec": 0, 00:15:04.629 "rw_mbytes_per_sec": 0, 00:15:04.629 "r_mbytes_per_sec": 0, 00:15:04.629 "w_mbytes_per_sec": 0 00:15:04.629 }, 00:15:04.629 "claimed": false, 00:15:04.629 "zoned": false, 00:15:04.629 "supported_io_types": { 00:15:04.629 "read": true, 00:15:04.629 "write": true, 00:15:04.629 "unmap": true, 00:15:04.629 "write_zeroes": true, 00:15:04.629 "flush": true, 00:15:04.629 "reset": true, 00:15:04.629 "compare": false, 00:15:04.629 "compare_and_write": false, 00:15:04.629 "abort": true, 00:15:04.629 "nvme_admin": false, 00:15:04.629 "nvme_io": false 00:15:04.629 }, 00:15:04.629 "memory_domains": [ 00:15:04.629 { 00:15:04.629 "dma_device_id": "system", 00:15:04.629 "dma_device_type": 1 00:15:04.629 }, 00:15:04.629 { 00:15:04.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.629 "dma_device_type": 2 00:15:04.629 } 00:15:04.629 ], 00:15:04.629 "driver_specific": {} 00:15:04.629 } 00:15:04.629 ] 00:15:04.629 18:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:04.629 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:04.629 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:04.629 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:04.888 [2024-06-10 18:59:19.513516] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev1 00:15:04.888 [2024-06-10 18:59:19.513552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.888 [2024-06-10 18:59:19.513568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.888 [2024-06-10 18:59:19.514794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.888 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.147 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:15:05.147 "name": "Existed_Raid", 00:15:05.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.147 "strip_size_kb": 64, 00:15:05.147 "state": "configuring", 00:15:05.147 "raid_level": "raid0", 00:15:05.147 "superblock": false, 00:15:05.147 "num_base_bdevs": 3, 00:15:05.147 "num_base_bdevs_discovered": 2, 00:15:05.147 "num_base_bdevs_operational": 3, 00:15:05.147 "base_bdevs_list": [ 00:15:05.147 { 00:15:05.147 "name": "BaseBdev1", 00:15:05.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.147 "is_configured": false, 00:15:05.147 "data_offset": 0, 00:15:05.147 "data_size": 0 00:15:05.147 }, 00:15:05.147 { 00:15:05.147 "name": "BaseBdev2", 00:15:05.147 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:05.147 "is_configured": true, 00:15:05.147 "data_offset": 0, 00:15:05.147 "data_size": 65536 00:15:05.147 }, 00:15:05.147 { 00:15:05.147 "name": "BaseBdev3", 00:15:05.147 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:05.147 "is_configured": true, 00:15:05.147 "data_offset": 0, 00:15:05.147 "data_size": 65536 00:15:05.147 } 00:15:05.147 ] 00:15:05.147 }' 00:15:05.147 18:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.147 18:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:05.973 [2024-06-10 18:59:20.536195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.973 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.231 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.231 "name": "Existed_Raid", 00:15:06.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.231 "strip_size_kb": 64, 00:15:06.231 "state": "configuring", 00:15:06.231 "raid_level": "raid0", 00:15:06.231 "superblock": false, 00:15:06.231 "num_base_bdevs": 3, 00:15:06.231 "num_base_bdevs_discovered": 1, 00:15:06.231 "num_base_bdevs_operational": 3, 00:15:06.231 "base_bdevs_list": [ 00:15:06.231 { 00:15:06.231 "name": "BaseBdev1", 00:15:06.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.231 "is_configured": false, 00:15:06.231 "data_offset": 0, 00:15:06.231 "data_size": 0 00:15:06.231 }, 00:15:06.231 { 00:15:06.231 "name": null, 00:15:06.231 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:06.231 "is_configured": 
false, 00:15:06.231 "data_offset": 0, 00:15:06.231 "data_size": 65536 00:15:06.231 }, 00:15:06.231 { 00:15:06.231 "name": "BaseBdev3", 00:15:06.231 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:06.231 "is_configured": true, 00:15:06.231 "data_offset": 0, 00:15:06.231 "data_size": 65536 00:15:06.231 } 00:15:06.231 ] 00:15:06.231 }' 00:15:06.231 18:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.231 18:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.797 18:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.797 18:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:07.055 [2024-06-10 18:59:21.790698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.055 BaseBdev1 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- 
# bdev_timeout=2000 00:15:07.055 18:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:07.313 18:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:07.573 [ 00:15:07.573 { 00:15:07.573 "name": "BaseBdev1", 00:15:07.573 "aliases": [ 00:15:07.573 "723c40ed-98a2-4e6d-8ca6-770d862ea1b4" 00:15:07.573 ], 00:15:07.573 "product_name": "Malloc disk", 00:15:07.573 "block_size": 512, 00:15:07.573 "num_blocks": 65536, 00:15:07.573 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:07.573 "assigned_rate_limits": { 00:15:07.573 "rw_ios_per_sec": 0, 00:15:07.573 "rw_mbytes_per_sec": 0, 00:15:07.573 "r_mbytes_per_sec": 0, 00:15:07.573 "w_mbytes_per_sec": 0 00:15:07.573 }, 00:15:07.573 "claimed": true, 00:15:07.573 "claim_type": "exclusive_write", 00:15:07.573 "zoned": false, 00:15:07.573 "supported_io_types": { 00:15:07.573 "read": true, 00:15:07.573 "write": true, 00:15:07.573 "unmap": true, 00:15:07.573 "write_zeroes": true, 00:15:07.573 "flush": true, 00:15:07.573 "reset": true, 00:15:07.573 "compare": false, 00:15:07.573 "compare_and_write": false, 00:15:07.573 "abort": true, 00:15:07.573 "nvme_admin": false, 00:15:07.573 "nvme_io": false 00:15:07.573 }, 00:15:07.573 "memory_domains": [ 00:15:07.573 { 00:15:07.573 "dma_device_id": "system", 00:15:07.573 "dma_device_type": 1 00:15:07.573 }, 00:15:07.573 { 00:15:07.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.573 "dma_device_type": 2 00:15:07.573 } 00:15:07.573 ], 00:15:07.573 "driver_specific": {} 00:15:07.573 } 00:15:07.573 ] 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.573 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.831 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.831 "name": "Existed_Raid", 00:15:07.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.831 "strip_size_kb": 64, 00:15:07.831 "state": "configuring", 00:15:07.831 "raid_level": "raid0", 00:15:07.831 "superblock": false, 00:15:07.831 "num_base_bdevs": 3, 00:15:07.831 "num_base_bdevs_discovered": 2, 00:15:07.831 "num_base_bdevs_operational": 3, 00:15:07.831 "base_bdevs_list": [ 00:15:07.831 { 00:15:07.831 "name": "BaseBdev1", 00:15:07.831 "uuid": 
"723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:07.831 "is_configured": true, 00:15:07.831 "data_offset": 0, 00:15:07.831 "data_size": 65536 00:15:07.831 }, 00:15:07.831 { 00:15:07.831 "name": null, 00:15:07.831 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:07.831 "is_configured": false, 00:15:07.831 "data_offset": 0, 00:15:07.831 "data_size": 65536 00:15:07.831 }, 00:15:07.831 { 00:15:07.831 "name": "BaseBdev3", 00:15:07.831 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:07.831 "is_configured": true, 00:15:07.831 "data_offset": 0, 00:15:07.831 "data_size": 65536 00:15:07.831 } 00:15:07.831 ] 00:15:07.831 }' 00:15:07.831 18:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.831 18:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.398 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.398 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:08.657 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:08.657 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:08.916 [2024-06-10 18:59:23.491351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.916 
18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.916 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.175 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.175 "name": "Existed_Raid", 00:15:09.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.175 "strip_size_kb": 64, 00:15:09.175 "state": "configuring", 00:15:09.175 "raid_level": "raid0", 00:15:09.175 "superblock": false, 00:15:09.175 "num_base_bdevs": 3, 00:15:09.175 "num_base_bdevs_discovered": 1, 00:15:09.175 "num_base_bdevs_operational": 3, 00:15:09.175 "base_bdevs_list": [ 00:15:09.175 { 00:15:09.175 "name": "BaseBdev1", 00:15:09.175 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:09.175 "is_configured": true, 00:15:09.175 "data_offset": 0, 00:15:09.175 "data_size": 65536 00:15:09.175 }, 00:15:09.175 { 00:15:09.175 "name": null, 00:15:09.175 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:09.175 "is_configured": false, 00:15:09.175 "data_offset": 0, 
00:15:09.175 "data_size": 65536 00:15:09.175 }, 00:15:09.175 { 00:15:09.175 "name": null, 00:15:09.175 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:09.175 "is_configured": false, 00:15:09.175 "data_offset": 0, 00:15:09.175 "data_size": 65536 00:15:09.175 } 00:15:09.175 ] 00:15:09.175 }' 00:15:09.175 18:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.175 18:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.742 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:09.742 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.001 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:10.001 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:10.001 [2024-06-10 18:59:24.754712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.260 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.260 "name": "Existed_Raid", 00:15:10.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.260 "strip_size_kb": 64, 00:15:10.260 "state": "configuring", 00:15:10.260 "raid_level": "raid0", 00:15:10.260 "superblock": false, 00:15:10.260 "num_base_bdevs": 3, 00:15:10.260 "num_base_bdevs_discovered": 2, 00:15:10.260 "num_base_bdevs_operational": 3, 00:15:10.260 "base_bdevs_list": [ 00:15:10.260 { 00:15:10.260 "name": "BaseBdev1", 00:15:10.260 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:10.260 "is_configured": true, 00:15:10.260 "data_offset": 0, 00:15:10.260 "data_size": 65536 00:15:10.260 }, 00:15:10.260 { 00:15:10.260 "name": null, 00:15:10.260 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:10.260 "is_configured": false, 00:15:10.260 "data_offset": 0, 00:15:10.260 "data_size": 65536 00:15:10.260 }, 00:15:10.260 { 00:15:10.260 "name": "BaseBdev3", 00:15:10.260 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:10.261 "is_configured": true, 00:15:10.261 "data_offset": 0, 00:15:10.261 "data_size": 65536 00:15:10.261 } 00:15:10.261 ] 00:15:10.261 }' 
00:15:10.261 18:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.261 18:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.827 18:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.827 18:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:11.085 18:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:11.085 18:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:11.354 [2024-06-10 18:59:25.981967] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.354 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:11.354 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.355 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.616 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.616 "name": "Existed_Raid", 00:15:11.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.616 "strip_size_kb": 64, 00:15:11.616 "state": "configuring", 00:15:11.616 "raid_level": "raid0", 00:15:11.616 "superblock": false, 00:15:11.616 "num_base_bdevs": 3, 00:15:11.616 "num_base_bdevs_discovered": 1, 00:15:11.616 "num_base_bdevs_operational": 3, 00:15:11.616 "base_bdevs_list": [ 00:15:11.616 { 00:15:11.616 "name": null, 00:15:11.616 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:11.616 "is_configured": false, 00:15:11.616 "data_offset": 0, 00:15:11.616 "data_size": 65536 00:15:11.616 }, 00:15:11.616 { 00:15:11.616 "name": null, 00:15:11.616 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:11.616 "is_configured": false, 00:15:11.616 "data_offset": 0, 00:15:11.616 "data_size": 65536 00:15:11.616 }, 00:15:11.616 { 00:15:11.616 "name": "BaseBdev3", 00:15:11.616 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:11.616 "is_configured": true, 00:15:11.616 "data_offset": 0, 00:15:11.616 "data_size": 65536 00:15:11.616 } 00:15:11.616 ] 00:15:11.616 }' 00:15:11.616 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.616 18:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.182 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.182 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:12.440 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:12.440 18:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:12.440 [2024-06-10 18:59:27.183287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.699 "name": "Existed_Raid", 00:15:12.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.699 "strip_size_kb": 64, 00:15:12.699 "state": "configuring", 00:15:12.699 "raid_level": "raid0", 00:15:12.699 "superblock": false, 00:15:12.699 "num_base_bdevs": 3, 00:15:12.699 "num_base_bdevs_discovered": 2, 00:15:12.699 "num_base_bdevs_operational": 3, 00:15:12.699 "base_bdevs_list": [ 00:15:12.699 { 00:15:12.699 "name": null, 00:15:12.699 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:12.699 "is_configured": false, 00:15:12.699 "data_offset": 0, 00:15:12.699 "data_size": 65536 00:15:12.699 }, 00:15:12.699 { 00:15:12.699 "name": "BaseBdev2", 00:15:12.699 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:12.699 "is_configured": true, 00:15:12.699 "data_offset": 0, 00:15:12.699 "data_size": 65536 00:15:12.699 }, 00:15:12.699 { 00:15:12.699 "name": "BaseBdev3", 00:15:12.699 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:12.699 "is_configured": true, 00:15:12.699 "data_offset": 0, 00:15:12.699 "data_size": 65536 00:15:12.699 } 00:15:12.699 ] 00:15:12.699 }' 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.699 18:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.266 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.266 18:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.524 
18:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:13.524 18:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.524 18:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:13.783 18:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 723c40ed-98a2-4e6d-8ca6-770d862ea1b4 00:15:14.042 [2024-06-10 18:59:28.606326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:14.042 [2024-06-10 18:59:28.606357] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15bbe00 00:15:14.042 [2024-06-10 18:59:28.606365] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:14.042 [2024-06-10 18:59:28.606534] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1778b40 00:15:14.042 [2024-06-10 18:59:28.606649] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15bbe00 00:15:14.042 [2024-06-10 18:59:28.606659] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15bbe00 00:15:14.042 [2024-06-10 18:59:28.606804] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.042 NewBaseBdev 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local i 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:14.042 18:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.301 18:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:14.559 [ 00:15:14.559 { 00:15:14.559 "name": "NewBaseBdev", 00:15:14.559 "aliases": [ 00:15:14.559 "723c40ed-98a2-4e6d-8ca6-770d862ea1b4" 00:15:14.559 ], 00:15:14.559 "product_name": "Malloc disk", 00:15:14.559 "block_size": 512, 00:15:14.559 "num_blocks": 65536, 00:15:14.559 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:14.559 "assigned_rate_limits": { 00:15:14.559 "rw_ios_per_sec": 0, 00:15:14.559 "rw_mbytes_per_sec": 0, 00:15:14.559 "r_mbytes_per_sec": 0, 00:15:14.559 "w_mbytes_per_sec": 0 00:15:14.559 }, 00:15:14.559 "claimed": true, 00:15:14.559 "claim_type": "exclusive_write", 00:15:14.559 "zoned": false, 00:15:14.559 "supported_io_types": { 00:15:14.559 "read": true, 00:15:14.559 "write": true, 00:15:14.559 "unmap": true, 00:15:14.559 "write_zeroes": true, 00:15:14.559 "flush": true, 00:15:14.559 "reset": true, 00:15:14.559 "compare": false, 00:15:14.559 "compare_and_write": false, 00:15:14.559 "abort": true, 00:15:14.559 "nvme_admin": false, 00:15:14.559 "nvme_io": false 00:15:14.559 }, 00:15:14.559 "memory_domains": [ 00:15:14.559 { 00:15:14.559 "dma_device_id": "system", 00:15:14.559 "dma_device_type": 1 00:15:14.559 }, 00:15:14.559 { 00:15:14.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.559 "dma_device_type": 2 00:15:14.559 } 00:15:14.559 ], 
00:15:14.559 "driver_specific": {} 00:15:14.559 } 00:15:14.559 ] 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.559 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.559 "name": "Existed_Raid", 00:15:14.559 "uuid": "8fef58c7-16dd-45b5-a655-c6f67c645a03", 00:15:14.559 "strip_size_kb": 64, 00:15:14.559 "state": "online", 00:15:14.559 "raid_level": "raid0", 00:15:14.559 "superblock": false, 
00:15:14.559 "num_base_bdevs": 3, 00:15:14.559 "num_base_bdevs_discovered": 3, 00:15:14.559 "num_base_bdevs_operational": 3, 00:15:14.559 "base_bdevs_list": [ 00:15:14.559 { 00:15:14.559 "name": "NewBaseBdev", 00:15:14.559 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:14.559 "is_configured": true, 00:15:14.559 "data_offset": 0, 00:15:14.560 "data_size": 65536 00:15:14.560 }, 00:15:14.560 { 00:15:14.560 "name": "BaseBdev2", 00:15:14.560 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:14.560 "is_configured": true, 00:15:14.560 "data_offset": 0, 00:15:14.560 "data_size": 65536 00:15:14.560 }, 00:15:14.560 { 00:15:14.560 "name": "BaseBdev3", 00:15:14.560 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:14.560 "is_configured": true, 00:15:14.560 "data_offset": 0, 00:15:14.560 "data_size": 65536 00:15:14.560 } 00:15:14.560 ] 00:15:14.560 }' 00:15:14.560 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.560 18:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:15.128 18:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:15.128 18:59:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:15.387 [2024-06-10 18:59:30.078464] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.387 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:15.387 "name": "Existed_Raid", 00:15:15.387 "aliases": [ 00:15:15.387 "8fef58c7-16dd-45b5-a655-c6f67c645a03" 00:15:15.387 ], 00:15:15.387 "product_name": "Raid Volume", 00:15:15.387 "block_size": 512, 00:15:15.387 "num_blocks": 196608, 00:15:15.387 "uuid": "8fef58c7-16dd-45b5-a655-c6f67c645a03", 00:15:15.387 "assigned_rate_limits": { 00:15:15.387 "rw_ios_per_sec": 0, 00:15:15.387 "rw_mbytes_per_sec": 0, 00:15:15.387 "r_mbytes_per_sec": 0, 00:15:15.387 "w_mbytes_per_sec": 0 00:15:15.387 }, 00:15:15.387 "claimed": false, 00:15:15.387 "zoned": false, 00:15:15.387 "supported_io_types": { 00:15:15.387 "read": true, 00:15:15.387 "write": true, 00:15:15.387 "unmap": true, 00:15:15.387 "write_zeroes": true, 00:15:15.387 "flush": true, 00:15:15.387 "reset": true, 00:15:15.387 "compare": false, 00:15:15.387 "compare_and_write": false, 00:15:15.387 "abort": false, 00:15:15.387 "nvme_admin": false, 00:15:15.388 "nvme_io": false 00:15:15.388 }, 00:15:15.388 "memory_domains": [ 00:15:15.388 { 00:15:15.388 "dma_device_id": "system", 00:15:15.388 "dma_device_type": 1 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.388 "dma_device_type": 2 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "dma_device_id": "system", 00:15:15.388 "dma_device_type": 1 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.388 "dma_device_type": 2 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "dma_device_id": "system", 00:15:15.388 "dma_device_type": 1 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.388 "dma_device_type": 2 00:15:15.388 } 00:15:15.388 ], 00:15:15.388 
"driver_specific": { 00:15:15.388 "raid": { 00:15:15.388 "uuid": "8fef58c7-16dd-45b5-a655-c6f67c645a03", 00:15:15.388 "strip_size_kb": 64, 00:15:15.388 "state": "online", 00:15:15.388 "raid_level": "raid0", 00:15:15.388 "superblock": false, 00:15:15.388 "num_base_bdevs": 3, 00:15:15.388 "num_base_bdevs_discovered": 3, 00:15:15.388 "num_base_bdevs_operational": 3, 00:15:15.388 "base_bdevs_list": [ 00:15:15.388 { 00:15:15.388 "name": "NewBaseBdev", 00:15:15.388 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:15.388 "is_configured": true, 00:15:15.388 "data_offset": 0, 00:15:15.388 "data_size": 65536 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "name": "BaseBdev2", 00:15:15.388 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:15.388 "is_configured": true, 00:15:15.388 "data_offset": 0, 00:15:15.388 "data_size": 65536 00:15:15.388 }, 00:15:15.388 { 00:15:15.388 "name": "BaseBdev3", 00:15:15.388 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:15.388 "is_configured": true, 00:15:15.388 "data_offset": 0, 00:15:15.388 "data_size": 65536 00:15:15.388 } 00:15:15.388 ] 00:15:15.388 } 00:15:15.388 } 00:15:15.388 }' 00:15:15.388 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.646 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:15.646 BaseBdev2 00:15:15.646 BaseBdev3' 00:15:15.646 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:15.646 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:15.646 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.646 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:15:15.646 "name": "NewBaseBdev", 00:15:15.646 "aliases": [ 00:15:15.646 "723c40ed-98a2-4e6d-8ca6-770d862ea1b4" 00:15:15.646 ], 00:15:15.646 "product_name": "Malloc disk", 00:15:15.646 "block_size": 512, 00:15:15.646 "num_blocks": 65536, 00:15:15.646 "uuid": "723c40ed-98a2-4e6d-8ca6-770d862ea1b4", 00:15:15.646 "assigned_rate_limits": { 00:15:15.646 "rw_ios_per_sec": 0, 00:15:15.646 "rw_mbytes_per_sec": 0, 00:15:15.646 "r_mbytes_per_sec": 0, 00:15:15.646 "w_mbytes_per_sec": 0 00:15:15.646 }, 00:15:15.646 "claimed": true, 00:15:15.646 "claim_type": "exclusive_write", 00:15:15.646 "zoned": false, 00:15:15.646 "supported_io_types": { 00:15:15.646 "read": true, 00:15:15.646 "write": true, 00:15:15.646 "unmap": true, 00:15:15.646 "write_zeroes": true, 00:15:15.646 "flush": true, 00:15:15.646 "reset": true, 00:15:15.646 "compare": false, 00:15:15.646 "compare_and_write": false, 00:15:15.646 "abort": true, 00:15:15.646 "nvme_admin": false, 00:15:15.646 "nvme_io": false 00:15:15.646 }, 00:15:15.646 "memory_domains": [ 00:15:15.646 { 00:15:15.646 "dma_device_id": "system", 00:15:15.646 "dma_device_type": 1 00:15:15.646 }, 00:15:15.646 { 00:15:15.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.646 "dma_device_type": 2 00:15:15.646 } 00:15:15.646 ], 00:15:15.646 "driver_specific": {} 00:15:15.646 }' 00:15:15.646 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:15.905 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.163 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.163 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.163 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:16.163 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:16.163 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:16.421 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:16.421 "name": "BaseBdev2", 00:15:16.421 "aliases": [ 00:15:16.421 "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c" 00:15:16.421 ], 00:15:16.421 "product_name": "Malloc disk", 00:15:16.421 "block_size": 512, 00:15:16.421 "num_blocks": 65536, 00:15:16.421 "uuid": "51b938a1-5946-46d4-8a72-b2a6a8fa0a7c", 00:15:16.421 "assigned_rate_limits": { 00:15:16.421 "rw_ios_per_sec": 0, 00:15:16.421 "rw_mbytes_per_sec": 0, 00:15:16.421 "r_mbytes_per_sec": 0, 00:15:16.421 "w_mbytes_per_sec": 0 00:15:16.421 }, 00:15:16.421 "claimed": true, 00:15:16.421 "claim_type": "exclusive_write", 00:15:16.421 "zoned": false, 00:15:16.421 "supported_io_types": { 00:15:16.421 "read": true, 00:15:16.421 "write": true, 00:15:16.421 "unmap": true, 00:15:16.421 "write_zeroes": true, 00:15:16.421 "flush": true, 00:15:16.421 "reset": true, 00:15:16.421 "compare": false, 00:15:16.421 "compare_and_write": false, 00:15:16.421 "abort": true, 
00:15:16.421 "nvme_admin": false, 00:15:16.421 "nvme_io": false 00:15:16.421 }, 00:15:16.421 "memory_domains": [ 00:15:16.421 { 00:15:16.421 "dma_device_id": "system", 00:15:16.421 "dma_device_type": 1 00:15:16.421 }, 00:15:16.421 { 00:15:16.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.421 "dma_device_type": 2 00:15:16.421 } 00:15:16.421 ], 00:15:16.421 "driver_specific": {} 00:15:16.421 }' 00:15:16.421 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.421 18:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.421 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:16.421 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.421 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.421 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.421 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.421 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.680 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.680 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.680 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.680 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.680 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:16.680 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:16.680 18:59:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:16.938 "name": "BaseBdev3", 00:15:16.938 "aliases": [ 00:15:16.938 "e663d961-4566-4ab2-b86e-e99dd91bac7f" 00:15:16.938 ], 00:15:16.938 "product_name": "Malloc disk", 00:15:16.938 "block_size": 512, 00:15:16.938 "num_blocks": 65536, 00:15:16.938 "uuid": "e663d961-4566-4ab2-b86e-e99dd91bac7f", 00:15:16.938 "assigned_rate_limits": { 00:15:16.938 "rw_ios_per_sec": 0, 00:15:16.938 "rw_mbytes_per_sec": 0, 00:15:16.938 "r_mbytes_per_sec": 0, 00:15:16.938 "w_mbytes_per_sec": 0 00:15:16.938 }, 00:15:16.938 "claimed": true, 00:15:16.938 "claim_type": "exclusive_write", 00:15:16.938 "zoned": false, 00:15:16.938 "supported_io_types": { 00:15:16.938 "read": true, 00:15:16.938 "write": true, 00:15:16.938 "unmap": true, 00:15:16.938 "write_zeroes": true, 00:15:16.938 "flush": true, 00:15:16.938 "reset": true, 00:15:16.938 "compare": false, 00:15:16.938 "compare_and_write": false, 00:15:16.938 "abort": true, 00:15:16.938 "nvme_admin": false, 00:15:16.938 "nvme_io": false 00:15:16.938 }, 00:15:16.938 "memory_domains": [ 00:15:16.938 { 00:15:16.938 "dma_device_id": "system", 00:15:16.938 "dma_device_type": 1 00:15:16.938 }, 00:15:16.938 { 00:15:16.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.938 "dma_device_type": 2 00:15:16.938 } 00:15:16.938 ], 00:15:16.938 "driver_specific": {} 00:15:16.938 }' 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.938 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.197 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.197 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.197 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.197 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.197 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.197 18:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:17.457 [2024-06-10 18:59:32.023376] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.457 [2024-06-10 18:59:32.023399] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.457 [2024-06-10 18:59:32.023442] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.457 [2024-06-10 18:59:32.023485] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.457 [2024-06-10 18:59:32.023496] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15bbe00 name Existed_Raid, state offline 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1642822 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1642822 ']' 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1642822 00:15:17.457 18:59:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1642822 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1642822' 00:15:17.457 killing process with pid 1642822 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1642822 00:15:17.457 [2024-06-10 18:59:32.102794] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.457 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1642822 00:15:17.457 [2024-06-10 18:59:32.125935] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:17.716 00:15:17.716 real 0m26.227s 00:15:17.716 user 0m48.140s 00:15:17.716 sys 0m4.710s 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.716 ************************************ 00:15:17.716 END TEST raid_state_function_test 00:15:17.716 ************************************ 00:15:17.716 18:59:32 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:17.716 18:59:32 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:17.716 18:59:32 bdev_raid -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:15:17.716 18:59:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.716 ************************************ 00:15:17.716 START TEST raid_state_function_test_sb 00:15:17.716 ************************************ 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 3 true 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:17.716 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1647959 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1647959' 00:15:17.717 Process raid pid: 1647959 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1647959 /var/tmp/spdk-raid.sock 00:15:17.717 18:59:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1647959 ']' 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:17.717 18:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.717 [2024-06-10 18:59:32.468766] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:15:17.717 [2024-06-10 18:59:32.468823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.0 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.1 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.2 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.3 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.4 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.5 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.6 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:01.7 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.0 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.1 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.2 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.3 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.4 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.5 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.6 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b6:02.7 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.0 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.1 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.2 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:15:17.977 EAL: Requested device 0000:b8:01.3 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.4 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.5 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.6 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:01.7 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.0 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.1 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.2 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.3 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.4 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.5 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.6 cannot be used 00:15:17.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.977 EAL: Requested device 0000:b8:02.7 cannot be used 00:15:17.977 [2024-06-10 18:59:32.601976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.977 [2024-06-10 18:59:32.687286] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.236 [2024-06-10 
18:59:32.744908] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.236 [2024-06-10 18:59:32.744942] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.805 18:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:18.805 18:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:15:18.805 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:19.064 [2024-06-10 18:59:33.571315] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.064 [2024-06-10 18:59:33.571355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.064 [2024-06-10 18:59:33.571365] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.064 [2024-06-10 18:59:33.571377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.064 [2024-06-10 18:59:33.571385] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.064 [2024-06-10 18:59:33.571395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:19.064 
18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.064 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.323 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.323 "name": "Existed_Raid", 00:15:19.323 "uuid": "0aca858d-aec9-4074-b6a9-d64d45006b5d", 00:15:19.323 "strip_size_kb": 64, 00:15:19.323 "state": "configuring", 00:15:19.323 "raid_level": "raid0", 00:15:19.323 "superblock": true, 00:15:19.323 "num_base_bdevs": 3, 00:15:19.323 "num_base_bdevs_discovered": 0, 00:15:19.323 "num_base_bdevs_operational": 3, 00:15:19.323 "base_bdevs_list": [ 00:15:19.323 { 00:15:19.323 "name": "BaseBdev1", 00:15:19.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.323 "is_configured": false, 00:15:19.323 "data_offset": 0, 00:15:19.323 "data_size": 0 00:15:19.323 }, 00:15:19.323 { 00:15:19.323 "name": "BaseBdev2", 00:15:19.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.323 "is_configured": false, 00:15:19.323 "data_offset": 0, 00:15:19.323 "data_size": 0 00:15:19.323 }, 00:15:19.323 { 00:15:19.323 "name": 
"BaseBdev3", 00:15:19.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.323 "is_configured": false, 00:15:19.323 "data_offset": 0, 00:15:19.323 "data_size": 0 00:15:19.323 } 00:15:19.323 ] 00:15:19.323 }' 00:15:19.323 18:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.323 18:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.893 18:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.893 [2024-06-10 18:59:34.533706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.893 [2024-06-10 18:59:34.533736] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208ff30 name Existed_Raid, state configuring 00:15:19.893 18:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:20.152 [2024-06-10 18:59:34.762332] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.152 [2024-06-10 18:59:34.762361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.152 [2024-06-10 18:59:34.762370] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.152 [2024-06-10 18:59:34.762381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.152 [2024-06-10 18:59:34.762389] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.152 [2024-06-10 18:59:34.762399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.152 18:59:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.411 [2024-06-10 18:59:34.992525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.411 BaseBdev1 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:20.411 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.669 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.927 [ 00:15:20.927 { 00:15:20.927 "name": "BaseBdev1", 00:15:20.927 "aliases": [ 00:15:20.927 "ec24f1e5-73ee-4cb1-bc5a-e238c7940240" 00:15:20.927 ], 00:15:20.927 "product_name": "Malloc disk", 00:15:20.927 "block_size": 512, 00:15:20.927 "num_blocks": 65536, 00:15:20.927 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:20.927 "assigned_rate_limits": { 00:15:20.927 "rw_ios_per_sec": 0, 00:15:20.927 "rw_mbytes_per_sec": 0, 00:15:20.927 "r_mbytes_per_sec": 0, 00:15:20.927 "w_mbytes_per_sec": 0 00:15:20.927 }, 
00:15:20.927 "claimed": true, 00:15:20.927 "claim_type": "exclusive_write", 00:15:20.927 "zoned": false, 00:15:20.927 "supported_io_types": { 00:15:20.927 "read": true, 00:15:20.927 "write": true, 00:15:20.927 "unmap": true, 00:15:20.927 "write_zeroes": true, 00:15:20.927 "flush": true, 00:15:20.927 "reset": true, 00:15:20.927 "compare": false, 00:15:20.927 "compare_and_write": false, 00:15:20.927 "abort": true, 00:15:20.927 "nvme_admin": false, 00:15:20.927 "nvme_io": false 00:15:20.927 }, 00:15:20.927 "memory_domains": [ 00:15:20.927 { 00:15:20.927 "dma_device_id": "system", 00:15:20.927 "dma_device_type": 1 00:15:20.927 }, 00:15:20.927 { 00:15:20.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.927 "dma_device_type": 2 00:15:20.927 } 00:15:20.927 ], 00:15:20.927 "driver_specific": {} 00:15:20.927 } 00:15:20.927 ] 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.927 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.186 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.186 "name": "Existed_Raid", 00:15:21.186 "uuid": "fc5dada6-1d67-48f3-a6c7-37baf2ff1155", 00:15:21.186 "strip_size_kb": 64, 00:15:21.186 "state": "configuring", 00:15:21.186 "raid_level": "raid0", 00:15:21.186 "superblock": true, 00:15:21.186 "num_base_bdevs": 3, 00:15:21.186 "num_base_bdevs_discovered": 1, 00:15:21.186 "num_base_bdevs_operational": 3, 00:15:21.186 "base_bdevs_list": [ 00:15:21.186 { 00:15:21.186 "name": "BaseBdev1", 00:15:21.186 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:21.186 "is_configured": true, 00:15:21.186 "data_offset": 2048, 00:15:21.186 "data_size": 63488 00:15:21.186 }, 00:15:21.186 { 00:15:21.186 "name": "BaseBdev2", 00:15:21.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.186 "is_configured": false, 00:15:21.186 "data_offset": 0, 00:15:21.186 "data_size": 0 00:15:21.186 }, 00:15:21.186 { 00:15:21.186 "name": "BaseBdev3", 00:15:21.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.186 "is_configured": false, 00:15:21.186 "data_offset": 0, 00:15:21.186 "data_size": 0 00:15:21.186 } 00:15:21.186 ] 00:15:21.186 }' 00:15:21.186 18:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.186 18:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.755 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.755 [2024-06-10 18:59:36.480593] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.755 [2024-06-10 18:59:36.480634] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208f800 name Existed_Raid, state configuring 00:15:21.755 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:22.014 [2024-06-10 18:59:36.701215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.014 [2024-06-10 18:59:36.702646] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.014 [2024-06-10 18:59:36.702680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.014 [2024-06-10 18:59:36.702689] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.014 [2024-06-10 18:59:36.702700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.014 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.273 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.273 "name": "Existed_Raid", 00:15:22.273 "uuid": "1233c987-2a33-4830-94f7-5e31cca8bcf4", 00:15:22.273 "strip_size_kb": 64, 00:15:22.273 "state": "configuring", 00:15:22.273 "raid_level": "raid0", 00:15:22.273 "superblock": true, 00:15:22.273 "num_base_bdevs": 3, 00:15:22.273 "num_base_bdevs_discovered": 1, 00:15:22.273 "num_base_bdevs_operational": 3, 00:15:22.273 "base_bdevs_list": [ 00:15:22.273 { 00:15:22.273 "name": "BaseBdev1", 00:15:22.273 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:22.273 "is_configured": true, 00:15:22.273 "data_offset": 2048, 00:15:22.273 "data_size": 63488 00:15:22.273 }, 00:15:22.273 { 00:15:22.273 "name": "BaseBdev2", 00:15:22.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.273 "is_configured": false, 00:15:22.273 "data_offset": 0, 00:15:22.273 
"data_size": 0 00:15:22.273 }, 00:15:22.273 { 00:15:22.273 "name": "BaseBdev3", 00:15:22.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.273 "is_configured": false, 00:15:22.273 "data_offset": 0, 00:15:22.273 "data_size": 0 00:15:22.273 } 00:15:22.273 ] 00:15:22.273 }' 00:15:22.273 18:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.273 18:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.910 18:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.169 [2024-06-10 18:59:37.727069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.169 BaseBdev2 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:23.169 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.430 18:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.430 [ 
00:15:23.430 { 00:15:23.430 "name": "BaseBdev2", 00:15:23.430 "aliases": [ 00:15:23.430 "b8cb0997-f298-4f64-b2e1-38b5af3d950a" 00:15:23.430 ], 00:15:23.430 "product_name": "Malloc disk", 00:15:23.430 "block_size": 512, 00:15:23.430 "num_blocks": 65536, 00:15:23.430 "uuid": "b8cb0997-f298-4f64-b2e1-38b5af3d950a", 00:15:23.430 "assigned_rate_limits": { 00:15:23.430 "rw_ios_per_sec": 0, 00:15:23.430 "rw_mbytes_per_sec": 0, 00:15:23.430 "r_mbytes_per_sec": 0, 00:15:23.430 "w_mbytes_per_sec": 0 00:15:23.430 }, 00:15:23.430 "claimed": true, 00:15:23.430 "claim_type": "exclusive_write", 00:15:23.430 "zoned": false, 00:15:23.430 "supported_io_types": { 00:15:23.430 "read": true, 00:15:23.430 "write": true, 00:15:23.430 "unmap": true, 00:15:23.430 "write_zeroes": true, 00:15:23.430 "flush": true, 00:15:23.430 "reset": true, 00:15:23.430 "compare": false, 00:15:23.430 "compare_and_write": false, 00:15:23.430 "abort": true, 00:15:23.430 "nvme_admin": false, 00:15:23.430 "nvme_io": false 00:15:23.430 }, 00:15:23.430 "memory_domains": [ 00:15:23.430 { 00:15:23.430 "dma_device_id": "system", 00:15:23.430 "dma_device_type": 1 00:15:23.430 }, 00:15:23.430 { 00:15:23.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.430 "dma_device_type": 2 00:15:23.430 } 00:15:23.430 ], 00:15:23.430 "driver_specific": {} 00:15:23.430 } 00:15:23.430 ] 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.689 "name": "Existed_Raid", 00:15:23.689 "uuid": "1233c987-2a33-4830-94f7-5e31cca8bcf4", 00:15:23.689 "strip_size_kb": 64, 00:15:23.689 "state": "configuring", 00:15:23.689 "raid_level": "raid0", 00:15:23.689 "superblock": true, 00:15:23.689 "num_base_bdevs": 3, 00:15:23.689 "num_base_bdevs_discovered": 2, 00:15:23.689 "num_base_bdevs_operational": 3, 00:15:23.689 "base_bdevs_list": [ 00:15:23.689 { 00:15:23.689 "name": "BaseBdev1", 00:15:23.689 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:23.689 "is_configured": true, 00:15:23.689 "data_offset": 2048, 00:15:23.689 "data_size": 63488 00:15:23.689 }, 00:15:23.689 { 00:15:23.689 "name": "BaseBdev2", 00:15:23.689 "uuid": 
"b8cb0997-f298-4f64-b2e1-38b5af3d950a", 00:15:23.689 "is_configured": true, 00:15:23.689 "data_offset": 2048, 00:15:23.689 "data_size": 63488 00:15:23.689 }, 00:15:23.689 { 00:15:23.689 "name": "BaseBdev3", 00:15:23.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.689 "is_configured": false, 00:15:23.689 "data_offset": 0, 00:15:23.689 "data_size": 0 00:15:23.689 } 00:15:23.689 ] 00:15:23.689 }' 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.689 18:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.257 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.516 [2024-06-10 18:59:39.214274] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.516 [2024-06-10 18:59:39.214419] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20906f0 00:15:24.516 [2024-06-10 18:59:39.214432] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:24.516 [2024-06-10 18:59:39.214610] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20903c0 00:15:24.516 [2024-06-10 18:59:39.214718] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20906f0 00:15:24.516 [2024-06-10 18:59:39.214728] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20906f0 00:15:24.516 [2024-06-10 18:59:39.214814] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.516 BaseBdev3 00:15:24.516 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:24.516 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:15:24.516 
18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:24.516 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:24.516 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:24.516 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:24.516 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.776 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.035 [ 00:15:25.035 { 00:15:25.035 "name": "BaseBdev3", 00:15:25.035 "aliases": [ 00:15:25.035 "ea8108d8-8867-4a31-ad51-7cc1ded0f03e" 00:15:25.035 ], 00:15:25.035 "product_name": "Malloc disk", 00:15:25.035 "block_size": 512, 00:15:25.035 "num_blocks": 65536, 00:15:25.035 "uuid": "ea8108d8-8867-4a31-ad51-7cc1ded0f03e", 00:15:25.035 "assigned_rate_limits": { 00:15:25.035 "rw_ios_per_sec": 0, 00:15:25.035 "rw_mbytes_per_sec": 0, 00:15:25.035 "r_mbytes_per_sec": 0, 00:15:25.035 "w_mbytes_per_sec": 0 00:15:25.035 }, 00:15:25.035 "claimed": true, 00:15:25.035 "claim_type": "exclusive_write", 00:15:25.035 "zoned": false, 00:15:25.035 "supported_io_types": { 00:15:25.035 "read": true, 00:15:25.035 "write": true, 00:15:25.035 "unmap": true, 00:15:25.035 "write_zeroes": true, 00:15:25.035 "flush": true, 00:15:25.035 "reset": true, 00:15:25.035 "compare": false, 00:15:25.035 "compare_and_write": false, 00:15:25.035 "abort": true, 00:15:25.035 "nvme_admin": false, 00:15:25.035 "nvme_io": false 00:15:25.035 }, 00:15:25.035 "memory_domains": [ 00:15:25.035 { 00:15:25.035 "dma_device_id": "system", 00:15:25.035 
"dma_device_type": 1 00:15:25.035 }, 00:15:25.035 { 00:15:25.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.035 "dma_device_type": 2 00:15:25.035 } 00:15:25.035 ], 00:15:25.035 "driver_specific": {} 00:15:25.035 } 00:15:25.035 ] 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.035 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.035 18:59:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.294 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.294 "name": "Existed_Raid", 00:15:25.294 "uuid": "1233c987-2a33-4830-94f7-5e31cca8bcf4", 00:15:25.294 "strip_size_kb": 64, 00:15:25.294 "state": "online", 00:15:25.294 "raid_level": "raid0", 00:15:25.294 "superblock": true, 00:15:25.294 "num_base_bdevs": 3, 00:15:25.294 "num_base_bdevs_discovered": 3, 00:15:25.294 "num_base_bdevs_operational": 3, 00:15:25.294 "base_bdevs_list": [ 00:15:25.294 { 00:15:25.294 "name": "BaseBdev1", 00:15:25.294 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:25.294 "is_configured": true, 00:15:25.294 "data_offset": 2048, 00:15:25.294 "data_size": 63488 00:15:25.294 }, 00:15:25.294 { 00:15:25.294 "name": "BaseBdev2", 00:15:25.294 "uuid": "b8cb0997-f298-4f64-b2e1-38b5af3d950a", 00:15:25.294 "is_configured": true, 00:15:25.294 "data_offset": 2048, 00:15:25.294 "data_size": 63488 00:15:25.294 }, 00:15:25.294 { 00:15:25.294 "name": "BaseBdev3", 00:15:25.294 "uuid": "ea8108d8-8867-4a31-ad51-7cc1ded0f03e", 00:15:25.294 "is_configured": true, 00:15:25.294 "data_offset": 2048, 00:15:25.294 "data_size": 63488 00:15:25.294 } 00:15:25.294 ] 00:15:25.294 }' 00:15:25.294 18:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.294 18:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:25.862 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:26.121 [2024-06-10 18:59:40.718484] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.121 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:26.121 "name": "Existed_Raid", 00:15:26.121 "aliases": [ 00:15:26.121 "1233c987-2a33-4830-94f7-5e31cca8bcf4" 00:15:26.121 ], 00:15:26.121 "product_name": "Raid Volume", 00:15:26.121 "block_size": 512, 00:15:26.121 "num_blocks": 190464, 00:15:26.121 "uuid": "1233c987-2a33-4830-94f7-5e31cca8bcf4", 00:15:26.121 "assigned_rate_limits": { 00:15:26.121 "rw_ios_per_sec": 0, 00:15:26.121 "rw_mbytes_per_sec": 0, 00:15:26.121 "r_mbytes_per_sec": 0, 00:15:26.121 "w_mbytes_per_sec": 0 00:15:26.121 }, 00:15:26.121 "claimed": false, 00:15:26.121 "zoned": false, 00:15:26.121 "supported_io_types": { 00:15:26.121 "read": true, 00:15:26.121 "write": true, 00:15:26.121 "unmap": true, 00:15:26.121 "write_zeroes": true, 00:15:26.121 "flush": true, 00:15:26.121 "reset": true, 00:15:26.121 "compare": false, 00:15:26.121 "compare_and_write": false, 00:15:26.121 "abort": false, 00:15:26.121 "nvme_admin": false, 00:15:26.121 "nvme_io": false 00:15:26.121 }, 00:15:26.121 "memory_domains": [ 00:15:26.121 { 00:15:26.121 "dma_device_id": "system", 00:15:26.121 "dma_device_type": 1 00:15:26.121 }, 00:15:26.121 { 00:15:26.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.121 "dma_device_type": 2 00:15:26.121 }, 00:15:26.121 { 
00:15:26.121 "dma_device_id": "system", 00:15:26.121 "dma_device_type": 1 00:15:26.121 }, 00:15:26.121 { 00:15:26.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.121 "dma_device_type": 2 00:15:26.121 }, 00:15:26.121 { 00:15:26.121 "dma_device_id": "system", 00:15:26.121 "dma_device_type": 1 00:15:26.121 }, 00:15:26.121 { 00:15:26.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.121 "dma_device_type": 2 00:15:26.121 } 00:15:26.121 ], 00:15:26.121 "driver_specific": { 00:15:26.121 "raid": { 00:15:26.121 "uuid": "1233c987-2a33-4830-94f7-5e31cca8bcf4", 00:15:26.121 "strip_size_kb": 64, 00:15:26.121 "state": "online", 00:15:26.121 "raid_level": "raid0", 00:15:26.121 "superblock": true, 00:15:26.121 "num_base_bdevs": 3, 00:15:26.121 "num_base_bdevs_discovered": 3, 00:15:26.121 "num_base_bdevs_operational": 3, 00:15:26.121 "base_bdevs_list": [ 00:15:26.121 { 00:15:26.121 "name": "BaseBdev1", 00:15:26.121 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:26.121 "is_configured": true, 00:15:26.121 "data_offset": 2048, 00:15:26.121 "data_size": 63488 00:15:26.121 }, 00:15:26.121 { 00:15:26.121 "name": "BaseBdev2", 00:15:26.121 "uuid": "b8cb0997-f298-4f64-b2e1-38b5af3d950a", 00:15:26.121 "is_configured": true, 00:15:26.121 "data_offset": 2048, 00:15:26.121 "data_size": 63488 00:15:26.121 }, 00:15:26.121 { 00:15:26.121 "name": "BaseBdev3", 00:15:26.121 "uuid": "ea8108d8-8867-4a31-ad51-7cc1ded0f03e", 00:15:26.121 "is_configured": true, 00:15:26.121 "data_offset": 2048, 00:15:26.121 "data_size": 63488 00:15:26.121 } 00:15:26.121 ] 00:15:26.121 } 00:15:26.121 } 00:15:26.121 }' 00:15:26.121 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.121 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:26.121 BaseBdev2 00:15:26.121 BaseBdev3' 00:15:26.121 18:59:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.121 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:26.121 18:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.380 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.380 "name": "BaseBdev1", 00:15:26.380 "aliases": [ 00:15:26.380 "ec24f1e5-73ee-4cb1-bc5a-e238c7940240" 00:15:26.380 ], 00:15:26.380 "product_name": "Malloc disk", 00:15:26.380 "block_size": 512, 00:15:26.380 "num_blocks": 65536, 00:15:26.380 "uuid": "ec24f1e5-73ee-4cb1-bc5a-e238c7940240", 00:15:26.380 "assigned_rate_limits": { 00:15:26.380 "rw_ios_per_sec": 0, 00:15:26.380 "rw_mbytes_per_sec": 0, 00:15:26.380 "r_mbytes_per_sec": 0, 00:15:26.380 "w_mbytes_per_sec": 0 00:15:26.380 }, 00:15:26.380 "claimed": true, 00:15:26.380 "claim_type": "exclusive_write", 00:15:26.380 "zoned": false, 00:15:26.380 "supported_io_types": { 00:15:26.380 "read": true, 00:15:26.380 "write": true, 00:15:26.380 "unmap": true, 00:15:26.380 "write_zeroes": true, 00:15:26.380 "flush": true, 00:15:26.380 "reset": true, 00:15:26.380 "compare": false, 00:15:26.380 "compare_and_write": false, 00:15:26.380 "abort": true, 00:15:26.380 "nvme_admin": false, 00:15:26.380 "nvme_io": false 00:15:26.380 }, 00:15:26.380 "memory_domains": [ 00:15:26.380 { 00:15:26.380 "dma_device_id": "system", 00:15:26.380 "dma_device_type": 1 00:15:26.380 }, 00:15:26.380 { 00:15:26.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.380 "dma_device_type": 2 00:15:26.380 } 00:15:26.380 ], 00:15:26.380 "driver_specific": {} 00:15:26.380 }' 00:15:26.380 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.380 18:59:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.380 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.380 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:26.639 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.898 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.898 "name": "BaseBdev2", 00:15:26.898 "aliases": [ 00:15:26.898 "b8cb0997-f298-4f64-b2e1-38b5af3d950a" 00:15:26.898 ], 00:15:26.898 "product_name": "Malloc disk", 00:15:26.898 "block_size": 512, 00:15:26.898 "num_blocks": 65536, 00:15:26.898 "uuid": "b8cb0997-f298-4f64-b2e1-38b5af3d950a", 00:15:26.898 "assigned_rate_limits": { 00:15:26.898 
"rw_ios_per_sec": 0, 00:15:26.898 "rw_mbytes_per_sec": 0, 00:15:26.898 "r_mbytes_per_sec": 0, 00:15:26.898 "w_mbytes_per_sec": 0 00:15:26.898 }, 00:15:26.898 "claimed": true, 00:15:26.898 "claim_type": "exclusive_write", 00:15:26.898 "zoned": false, 00:15:26.898 "supported_io_types": { 00:15:26.898 "read": true, 00:15:26.898 "write": true, 00:15:26.898 "unmap": true, 00:15:26.898 "write_zeroes": true, 00:15:26.898 "flush": true, 00:15:26.898 "reset": true, 00:15:26.898 "compare": false, 00:15:26.898 "compare_and_write": false, 00:15:26.898 "abort": true, 00:15:26.898 "nvme_admin": false, 00:15:26.898 "nvme_io": false 00:15:26.898 }, 00:15:26.898 "memory_domains": [ 00:15:26.898 { 00:15:26.898 "dma_device_id": "system", 00:15:26.898 "dma_device_type": 1 00:15:26.898 }, 00:15:26.898 { 00:15:26.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.898 "dma_device_type": 2 00:15:26.898 } 00:15:26.898 ], 00:15:26.898 "driver_specific": {} 00:15:26.898 }' 00:15:26.898 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.898 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.898 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.898 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:27.157 18:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.416 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.416 "name": "BaseBdev3", 00:15:27.416 "aliases": [ 00:15:27.416 "ea8108d8-8867-4a31-ad51-7cc1ded0f03e" 00:15:27.416 ], 00:15:27.416 "product_name": "Malloc disk", 00:15:27.416 "block_size": 512, 00:15:27.416 "num_blocks": 65536, 00:15:27.416 "uuid": "ea8108d8-8867-4a31-ad51-7cc1ded0f03e", 00:15:27.416 "assigned_rate_limits": { 00:15:27.416 "rw_ios_per_sec": 0, 00:15:27.416 "rw_mbytes_per_sec": 0, 00:15:27.416 "r_mbytes_per_sec": 0, 00:15:27.416 "w_mbytes_per_sec": 0 00:15:27.416 }, 00:15:27.416 "claimed": true, 00:15:27.416 "claim_type": "exclusive_write", 00:15:27.416 "zoned": false, 00:15:27.416 "supported_io_types": { 00:15:27.416 "read": true, 00:15:27.416 "write": true, 00:15:27.416 "unmap": true, 00:15:27.416 "write_zeroes": true, 00:15:27.416 "flush": true, 00:15:27.416 "reset": true, 00:15:27.416 "compare": false, 00:15:27.416 "compare_and_write": false, 00:15:27.416 "abort": true, 00:15:27.416 "nvme_admin": false, 00:15:27.416 "nvme_io": false 00:15:27.416 }, 00:15:27.416 "memory_domains": [ 00:15:27.416 { 00:15:27.416 "dma_device_id": "system", 00:15:27.416 "dma_device_type": 1 00:15:27.416 }, 00:15:27.416 { 00:15:27.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.416 
"dma_device_type": 2 00:15:27.416 } 00:15:27.416 ], 00:15:27.416 "driver_specific": {} 00:15:27.416 }' 00:15:27.416 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.416 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.675 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:27.935 [2024-06-10 18:59:42.651352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.935 [2024-06-10 18:59:42.651378] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.935 [2024-06-10 18:59:42.651416] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@275 -- # local expected_state 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.935 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:28.194 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.194 "name": "Existed_Raid", 00:15:28.194 "uuid": "1233c987-2a33-4830-94f7-5e31cca8bcf4", 00:15:28.194 "strip_size_kb": 64, 00:15:28.194 "state": "offline", 00:15:28.194 "raid_level": "raid0", 00:15:28.194 "superblock": true, 00:15:28.194 "num_base_bdevs": 3, 00:15:28.194 "num_base_bdevs_discovered": 2, 00:15:28.194 "num_base_bdevs_operational": 2, 00:15:28.194 "base_bdevs_list": [ 00:15:28.194 { 00:15:28.194 "name": null, 00:15:28.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.194 "is_configured": false, 00:15:28.194 "data_offset": 2048, 00:15:28.194 "data_size": 63488 00:15:28.194 }, 00:15:28.194 { 00:15:28.194 "name": "BaseBdev2", 00:15:28.194 "uuid": "b8cb0997-f298-4f64-b2e1-38b5af3d950a", 00:15:28.194 "is_configured": true, 00:15:28.194 "data_offset": 2048, 00:15:28.194 "data_size": 63488 00:15:28.194 }, 00:15:28.194 { 00:15:28.194 "name": "BaseBdev3", 00:15:28.194 "uuid": "ea8108d8-8867-4a31-ad51-7cc1ded0f03e", 00:15:28.194 "is_configured": true, 00:15:28.194 "data_offset": 2048, 00:15:28.194 "data_size": 63488 00:15:28.194 } 00:15:28.194 ] 00:15:28.194 }' 00:15:28.194 18:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.194 18:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.761 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:28.761 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:28.761 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.761 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:29.019 18:59:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:29.019 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.019 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:29.279 [2024-06-10 18:59:43.867628] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.279 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:29.279 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:29.279 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.279 18:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:29.538 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:29.538 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.538 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:29.797 [2024-06-10 18:59:44.322912] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.797 [2024-06-10 18:59:44.322951] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20906f0 name Existed_Raid, state offline 00:15:29.797 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:29.797 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 
00:15:29.797 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.797 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.056 BaseBdev2 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:30.056 18:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
00:15:30.314 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.573 [ 00:15:30.573 { 00:15:30.573 "name": "BaseBdev2", 00:15:30.573 "aliases": [ 00:15:30.573 "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91" 00:15:30.573 ], 00:15:30.573 "product_name": "Malloc disk", 00:15:30.573 "block_size": 512, 00:15:30.573 "num_blocks": 65536, 00:15:30.573 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:30.573 "assigned_rate_limits": { 00:15:30.573 "rw_ios_per_sec": 0, 00:15:30.573 "rw_mbytes_per_sec": 0, 00:15:30.573 "r_mbytes_per_sec": 0, 00:15:30.573 "w_mbytes_per_sec": 0 00:15:30.573 }, 00:15:30.573 "claimed": false, 00:15:30.573 "zoned": false, 00:15:30.573 "supported_io_types": { 00:15:30.573 "read": true, 00:15:30.573 "write": true, 00:15:30.573 "unmap": true, 00:15:30.573 "write_zeroes": true, 00:15:30.573 "flush": true, 00:15:30.573 "reset": true, 00:15:30.573 "compare": false, 00:15:30.573 "compare_and_write": false, 00:15:30.573 "abort": true, 00:15:30.573 "nvme_admin": false, 00:15:30.573 "nvme_io": false 00:15:30.573 }, 00:15:30.573 "memory_domains": [ 00:15:30.573 { 00:15:30.573 "dma_device_id": "system", 00:15:30.573 "dma_device_type": 1 00:15:30.573 }, 00:15:30.573 { 00:15:30.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.573 "dma_device_type": 2 00:15:30.573 } 00:15:30.573 ], 00:15:30.573 "driver_specific": {} 00:15:30.573 } 00:15:30.573 ] 00:15:30.573 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:30.573 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:30.573 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:30.573 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:30.831 BaseBdev3 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:30.831 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.090 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:31.355 [ 00:15:31.355 { 00:15:31.355 "name": "BaseBdev3", 00:15:31.355 "aliases": [ 00:15:31.355 "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578" 00:15:31.355 ], 00:15:31.355 "product_name": "Malloc disk", 00:15:31.355 "block_size": 512, 00:15:31.355 "num_blocks": 65536, 00:15:31.355 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:31.355 "assigned_rate_limits": { 00:15:31.355 "rw_ios_per_sec": 0, 00:15:31.355 "rw_mbytes_per_sec": 0, 00:15:31.355 "r_mbytes_per_sec": 0, 00:15:31.355 "w_mbytes_per_sec": 0 00:15:31.355 }, 00:15:31.355 "claimed": false, 00:15:31.355 "zoned": false, 00:15:31.355 "supported_io_types": { 00:15:31.355 "read": true, 00:15:31.355 "write": true, 00:15:31.355 "unmap": true, 00:15:31.355 
"write_zeroes": true, 00:15:31.355 "flush": true, 00:15:31.355 "reset": true, 00:15:31.355 "compare": false, 00:15:31.355 "compare_and_write": false, 00:15:31.355 "abort": true, 00:15:31.355 "nvme_admin": false, 00:15:31.355 "nvme_io": false 00:15:31.355 }, 00:15:31.355 "memory_domains": [ 00:15:31.355 { 00:15:31.355 "dma_device_id": "system", 00:15:31.355 "dma_device_type": 1 00:15:31.355 }, 00:15:31.355 { 00:15:31.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.355 "dma_device_type": 2 00:15:31.355 } 00:15:31.355 ], 00:15:31.355 "driver_specific": {} 00:15:31.355 } 00:15:31.355 ] 00:15:31.355 18:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:31.355 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:31.355 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:31.355 18:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:31.615 [2024-06-10 18:59:46.126106] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.615 [2024-06-10 18:59:46.126146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.615 [2024-06-10 18:59:46.126164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.615 [2024-06-10 18:59:46.127396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.615 18:59:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.615 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.874 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.874 "name": "Existed_Raid", 00:15:31.874 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:31.874 "strip_size_kb": 64, 00:15:31.874 "state": "configuring", 00:15:31.874 "raid_level": "raid0", 00:15:31.874 "superblock": true, 00:15:31.874 "num_base_bdevs": 3, 00:15:31.874 "num_base_bdevs_discovered": 2, 00:15:31.874 "num_base_bdevs_operational": 3, 00:15:31.874 "base_bdevs_list": [ 00:15:31.874 { 00:15:31.874 "name": "BaseBdev1", 00:15:31.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.874 "is_configured": false, 00:15:31.874 "data_offset": 0, 00:15:31.874 "data_size": 0 00:15:31.874 }, 00:15:31.874 { 00:15:31.874 "name": 
"BaseBdev2", 00:15:31.874 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:31.874 "is_configured": true, 00:15:31.874 "data_offset": 2048, 00:15:31.874 "data_size": 63488 00:15:31.874 }, 00:15:31.874 { 00:15:31.874 "name": "BaseBdev3", 00:15:31.874 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:31.874 "is_configured": true, 00:15:31.874 "data_offset": 2048, 00:15:31.874 "data_size": 63488 00:15:31.874 } 00:15:31.874 ] 00:15:31.874 }' 00:15:31.874 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.874 18:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.442 18:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:32.442 [2024-06-10 18:59:47.140736] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.442 18:59:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.442 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.700 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.700 "name": "Existed_Raid", 00:15:32.700 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:32.700 "strip_size_kb": 64, 00:15:32.700 "state": "configuring", 00:15:32.700 "raid_level": "raid0", 00:15:32.700 "superblock": true, 00:15:32.700 "num_base_bdevs": 3, 00:15:32.700 "num_base_bdevs_discovered": 1, 00:15:32.700 "num_base_bdevs_operational": 3, 00:15:32.700 "base_bdevs_list": [ 00:15:32.700 { 00:15:32.700 "name": "BaseBdev1", 00:15:32.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.700 "is_configured": false, 00:15:32.700 "data_offset": 0, 00:15:32.700 "data_size": 0 00:15:32.700 }, 00:15:32.700 { 00:15:32.700 "name": null, 00:15:32.700 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:32.700 "is_configured": false, 00:15:32.700 "data_offset": 2048, 00:15:32.700 "data_size": 63488 00:15:32.700 }, 00:15:32.700 { 00:15:32.700 "name": "BaseBdev3", 00:15:32.700 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:32.700 "is_configured": true, 00:15:32.700 "data_offset": 2048, 00:15:32.700 "data_size": 63488 00:15:32.700 } 00:15:32.700 ] 00:15:32.700 }' 00:15:32.700 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.700 18:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.268 18:59:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.268 18:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:33.527 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:33.527 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.785 [2024-06-10 18:59:48.395294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.785 BaseBdev1 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:33.785 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.044 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.303 [ 00:15:34.303 { 00:15:34.303 "name": "BaseBdev1", 00:15:34.303 
"aliases": [ 00:15:34.303 "88e6d15f-c8c3-44b1-b7f6-c24351f97208" 00:15:34.303 ], 00:15:34.303 "product_name": "Malloc disk", 00:15:34.303 "block_size": 512, 00:15:34.303 "num_blocks": 65536, 00:15:34.303 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:34.303 "assigned_rate_limits": { 00:15:34.303 "rw_ios_per_sec": 0, 00:15:34.303 "rw_mbytes_per_sec": 0, 00:15:34.303 "r_mbytes_per_sec": 0, 00:15:34.303 "w_mbytes_per_sec": 0 00:15:34.303 }, 00:15:34.303 "claimed": true, 00:15:34.303 "claim_type": "exclusive_write", 00:15:34.303 "zoned": false, 00:15:34.303 "supported_io_types": { 00:15:34.303 "read": true, 00:15:34.303 "write": true, 00:15:34.303 "unmap": true, 00:15:34.303 "write_zeroes": true, 00:15:34.303 "flush": true, 00:15:34.303 "reset": true, 00:15:34.303 "compare": false, 00:15:34.303 "compare_and_write": false, 00:15:34.303 "abort": true, 00:15:34.303 "nvme_admin": false, 00:15:34.303 "nvme_io": false 00:15:34.303 }, 00:15:34.303 "memory_domains": [ 00:15:34.303 { 00:15:34.303 "dma_device_id": "system", 00:15:34.303 "dma_device_type": 1 00:15:34.303 }, 00:15:34.303 { 00:15:34.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.303 "dma_device_type": 2 00:15:34.303 } 00:15:34.303 ], 00:15:34.303 "driver_specific": {} 00:15:34.303 } 00:15:34.303 ] 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.303 18:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.562 18:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.562 "name": "Existed_Raid", 00:15:34.562 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:34.562 "strip_size_kb": 64, 00:15:34.562 "state": "configuring", 00:15:34.562 "raid_level": "raid0", 00:15:34.562 "superblock": true, 00:15:34.562 "num_base_bdevs": 3, 00:15:34.562 "num_base_bdevs_discovered": 2, 00:15:34.562 "num_base_bdevs_operational": 3, 00:15:34.562 "base_bdevs_list": [ 00:15:34.562 { 00:15:34.562 "name": "BaseBdev1", 00:15:34.562 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:34.562 "is_configured": true, 00:15:34.562 "data_offset": 2048, 00:15:34.562 "data_size": 63488 00:15:34.562 }, 00:15:34.562 { 00:15:34.562 "name": null, 00:15:34.562 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:34.562 "is_configured": false, 00:15:34.562 "data_offset": 2048, 00:15:34.562 "data_size": 63488 00:15:34.562 }, 00:15:34.562 { 00:15:34.562 "name": "BaseBdev3", 00:15:34.562 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 
00:15:34.562 "is_configured": true, 00:15:34.562 "data_offset": 2048, 00:15:34.562 "data_size": 63488 00:15:34.562 } 00:15:34.562 ] 00:15:34.562 }' 00:15:34.562 18:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.562 18:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.129 18:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.129 18:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:35.129 18:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:35.129 18:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:35.388 [2024-06-10 18:59:50.072164] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.388 
18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.388 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.647 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.647 "name": "Existed_Raid", 00:15:35.647 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:35.647 "strip_size_kb": 64, 00:15:35.647 "state": "configuring", 00:15:35.647 "raid_level": "raid0", 00:15:35.647 "superblock": true, 00:15:35.647 "num_base_bdevs": 3, 00:15:35.647 "num_base_bdevs_discovered": 1, 00:15:35.647 "num_base_bdevs_operational": 3, 00:15:35.647 "base_bdevs_list": [ 00:15:35.647 { 00:15:35.647 "name": "BaseBdev1", 00:15:35.647 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:35.647 "is_configured": true, 00:15:35.647 "data_offset": 2048, 00:15:35.647 "data_size": 63488 00:15:35.647 }, 00:15:35.647 { 00:15:35.647 "name": null, 00:15:35.647 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:35.647 "is_configured": false, 00:15:35.647 "data_offset": 2048, 00:15:35.647 "data_size": 63488 00:15:35.647 }, 00:15:35.647 { 00:15:35.647 "name": null, 00:15:35.647 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:35.647 "is_configured": false, 00:15:35.647 "data_offset": 2048, 00:15:35.647 "data_size": 63488 00:15:35.647 } 00:15:35.647 ] 00:15:35.647 }' 00:15:35.647 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.647 18:59:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.214 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.214 18:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:36.474 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:36.474 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:36.733 [2024-06-10 18:59:51.331291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.733 18:59:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.733 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.992 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.992 "name": "Existed_Raid", 00:15:36.992 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:36.992 "strip_size_kb": 64, 00:15:36.992 "state": "configuring", 00:15:36.992 "raid_level": "raid0", 00:15:36.992 "superblock": true, 00:15:36.992 "num_base_bdevs": 3, 00:15:36.992 "num_base_bdevs_discovered": 2, 00:15:36.992 "num_base_bdevs_operational": 3, 00:15:36.992 "base_bdevs_list": [ 00:15:36.992 { 00:15:36.992 "name": "BaseBdev1", 00:15:36.992 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:36.992 "is_configured": true, 00:15:36.992 "data_offset": 2048, 00:15:36.992 "data_size": 63488 00:15:36.992 }, 00:15:36.992 { 00:15:36.992 "name": null, 00:15:36.992 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:36.992 "is_configured": false, 00:15:36.992 "data_offset": 2048, 00:15:36.992 "data_size": 63488 00:15:36.992 }, 00:15:36.993 { 00:15:36.993 "name": "BaseBdev3", 00:15:36.993 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:36.993 "is_configured": true, 00:15:36.993 "data_offset": 2048, 00:15:36.993 "data_size": 63488 00:15:36.993 } 00:15:36.993 ] 00:15:36.993 }' 00:15:36.993 18:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.993 18:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.560 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.560 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.819 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:37.819 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:38.078 [2024-06-10 18:59:52.586620] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:38.078 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.346 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.346 "name": "Existed_Raid", 00:15:38.346 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:38.346 "strip_size_kb": 64, 00:15:38.346 "state": "configuring", 00:15:38.346 "raid_level": "raid0", 00:15:38.346 "superblock": true, 00:15:38.346 "num_base_bdevs": 3, 00:15:38.346 "num_base_bdevs_discovered": 1, 00:15:38.346 "num_base_bdevs_operational": 3, 00:15:38.346 "base_bdevs_list": [ 00:15:38.346 { 00:15:38.346 "name": null, 00:15:38.346 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:38.346 "is_configured": false, 00:15:38.346 "data_offset": 2048, 00:15:38.346 "data_size": 63488 00:15:38.346 }, 00:15:38.346 { 00:15:38.346 "name": null, 00:15:38.346 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:38.346 "is_configured": false, 00:15:38.346 "data_offset": 2048, 00:15:38.346 "data_size": 63488 00:15:38.346 }, 00:15:38.346 { 00:15:38.346 "name": "BaseBdev3", 00:15:38.346 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:38.346 "is_configured": true, 00:15:38.346 "data_offset": 2048, 00:15:38.346 "data_size": 63488 00:15:38.346 } 00:15:38.346 ] 00:15:38.346 }' 00:15:38.346 18:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.346 18:59:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.912 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.912 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.912 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == 
\f\a\l\s\e ]] 00:15:38.912 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:39.171 [2024-06-10 18:59:53.859974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.171 18:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.429 18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:15:39.429 "name": "Existed_Raid", 00:15:39.429 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:39.429 "strip_size_kb": 64, 00:15:39.429 "state": "configuring", 00:15:39.429 "raid_level": "raid0", 00:15:39.429 "superblock": true, 00:15:39.429 "num_base_bdevs": 3, 00:15:39.429 "num_base_bdevs_discovered": 2, 00:15:39.429 "num_base_bdevs_operational": 3, 00:15:39.429 "base_bdevs_list": [ 00:15:39.429 { 00:15:39.429 "name": null, 00:15:39.429 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:39.429 "is_configured": false, 00:15:39.429 "data_offset": 2048, 00:15:39.429 "data_size": 63488 00:15:39.429 }, 00:15:39.429 { 00:15:39.429 "name": "BaseBdev2", 00:15:39.429 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:39.429 "is_configured": true, 00:15:39.429 "data_offset": 2048, 00:15:39.429 "data_size": 63488 00:15:39.429 }, 00:15:39.429 { 00:15:39.429 "name": "BaseBdev3", 00:15:39.429 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:39.429 "is_configured": true, 00:15:39.429 "data_offset": 2048, 00:15:39.429 "data_size": 63488 00:15:39.429 } 00:15:39.429 ] 00:15:39.429 }' 00:15:39.429 18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.429 18:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.996 18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.996 18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.255 18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:40.255 18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.255 
18:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:40.514 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 88e6d15f-c8c3-44b1-b7f6-c24351f97208 00:15:40.773 [2024-06-10 18:59:55.343115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:40.773 [2024-06-10 18:59:55.343251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2090fe0 00:15:40.773 [2024-06-10 18:59:55.343263] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:40.773 [2024-06-10 18:59:55.343422] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2243b40 00:15:40.773 [2024-06-10 18:59:55.343525] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2090fe0 00:15:40.773 [2024-06-10 18:59:55.343534] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2090fe0 00:15:40.773 [2024-06-10 18:59:55.343631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.773 NewBaseBdev 00:15:40.773 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:40.773 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:15:40.773 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:40.773 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:40.773 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:40.773 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:40.773 18:59:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.031 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.295 [ 00:15:41.295 { 00:15:41.295 "name": "NewBaseBdev", 00:15:41.295 "aliases": [ 00:15:41.295 "88e6d15f-c8c3-44b1-b7f6-c24351f97208" 00:15:41.295 ], 00:15:41.295 "product_name": "Malloc disk", 00:15:41.295 "block_size": 512, 00:15:41.295 "num_blocks": 65536, 00:15:41.295 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:41.295 "assigned_rate_limits": { 00:15:41.295 "rw_ios_per_sec": 0, 00:15:41.295 "rw_mbytes_per_sec": 0, 00:15:41.295 "r_mbytes_per_sec": 0, 00:15:41.295 "w_mbytes_per_sec": 0 00:15:41.295 }, 00:15:41.295 "claimed": true, 00:15:41.295 "claim_type": "exclusive_write", 00:15:41.295 "zoned": false, 00:15:41.295 "supported_io_types": { 00:15:41.295 "read": true, 00:15:41.295 "write": true, 00:15:41.295 "unmap": true, 00:15:41.295 "write_zeroes": true, 00:15:41.295 "flush": true, 00:15:41.295 "reset": true, 00:15:41.295 "compare": false, 00:15:41.295 "compare_and_write": false, 00:15:41.295 "abort": true, 00:15:41.295 "nvme_admin": false, 00:15:41.295 "nvme_io": false 00:15:41.295 }, 00:15:41.295 "memory_domains": [ 00:15:41.295 { 00:15:41.295 "dma_device_id": "system", 00:15:41.295 "dma_device_type": 1 00:15:41.295 }, 00:15:41.295 { 00:15:41.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.296 "dma_device_type": 2 00:15:41.296 } 00:15:41.296 ], 00:15:41.296 "driver_specific": {} 00:15:41.296 } 00:15:41.296 ] 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.296 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.296 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.296 "name": "Existed_Raid", 00:15:41.296 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:41.296 "strip_size_kb": 64, 00:15:41.296 "state": "online", 00:15:41.296 "raid_level": "raid0", 00:15:41.296 "superblock": true, 00:15:41.296 "num_base_bdevs": 3, 00:15:41.296 "num_base_bdevs_discovered": 3, 00:15:41.296 "num_base_bdevs_operational": 3, 00:15:41.296 "base_bdevs_list": [ 00:15:41.296 { 00:15:41.296 "name": "NewBaseBdev", 00:15:41.296 "uuid": 
"88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:41.296 "is_configured": true, 00:15:41.296 "data_offset": 2048, 00:15:41.296 "data_size": 63488 00:15:41.296 }, 00:15:41.296 { 00:15:41.296 "name": "BaseBdev2", 00:15:41.296 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:41.296 "is_configured": true, 00:15:41.296 "data_offset": 2048, 00:15:41.296 "data_size": 63488 00:15:41.296 }, 00:15:41.296 { 00:15:41.296 "name": "BaseBdev3", 00:15:41.296 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:41.296 "is_configured": true, 00:15:41.296 "data_offset": 2048, 00:15:41.296 "data_size": 63488 00:15:41.296 } 00:15:41.296 ] 00:15:41.296 }' 00:15:41.296 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.296 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:41.932 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:42.191 [2024-06-10 18:59:56.827273] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.191 18:59:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:42.191 "name": "Existed_Raid", 00:15:42.191 "aliases": [ 00:15:42.191 "c370f219-cecc-4dcf-8a41-99e8115d2d0c" 00:15:42.191 ], 00:15:42.191 "product_name": "Raid Volume", 00:15:42.191 "block_size": 512, 00:15:42.191 "num_blocks": 190464, 00:15:42.191 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:42.191 "assigned_rate_limits": { 00:15:42.191 "rw_ios_per_sec": 0, 00:15:42.191 "rw_mbytes_per_sec": 0, 00:15:42.191 "r_mbytes_per_sec": 0, 00:15:42.191 "w_mbytes_per_sec": 0 00:15:42.191 }, 00:15:42.191 "claimed": false, 00:15:42.191 "zoned": false, 00:15:42.191 "supported_io_types": { 00:15:42.191 "read": true, 00:15:42.191 "write": true, 00:15:42.191 "unmap": true, 00:15:42.191 "write_zeroes": true, 00:15:42.191 "flush": true, 00:15:42.191 "reset": true, 00:15:42.191 "compare": false, 00:15:42.191 "compare_and_write": false, 00:15:42.191 "abort": false, 00:15:42.191 "nvme_admin": false, 00:15:42.191 "nvme_io": false 00:15:42.191 }, 00:15:42.191 "memory_domains": [ 00:15:42.191 { 00:15:42.191 "dma_device_id": "system", 00:15:42.191 "dma_device_type": 1 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.191 "dma_device_type": 2 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "dma_device_id": "system", 00:15:42.191 "dma_device_type": 1 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.191 "dma_device_type": 2 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "dma_device_id": "system", 00:15:42.191 "dma_device_type": 1 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.191 "dma_device_type": 2 00:15:42.191 } 00:15:42.191 ], 00:15:42.191 "driver_specific": { 00:15:42.191 "raid": { 00:15:42.191 "uuid": "c370f219-cecc-4dcf-8a41-99e8115d2d0c", 00:15:42.191 "strip_size_kb": 64, 00:15:42.191 "state": "online", 00:15:42.191 "raid_level": "raid0", 00:15:42.191 
"superblock": true, 00:15:42.191 "num_base_bdevs": 3, 00:15:42.191 "num_base_bdevs_discovered": 3, 00:15:42.191 "num_base_bdevs_operational": 3, 00:15:42.191 "base_bdevs_list": [ 00:15:42.191 { 00:15:42.191 "name": "NewBaseBdev", 00:15:42.191 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:42.191 "is_configured": true, 00:15:42.191 "data_offset": 2048, 00:15:42.191 "data_size": 63488 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "name": "BaseBdev2", 00:15:42.191 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:42.191 "is_configured": true, 00:15:42.191 "data_offset": 2048, 00:15:42.191 "data_size": 63488 00:15:42.191 }, 00:15:42.191 { 00:15:42.191 "name": "BaseBdev3", 00:15:42.191 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:42.191 "is_configured": true, 00:15:42.191 "data_offset": 2048, 00:15:42.191 "data_size": 63488 00:15:42.191 } 00:15:42.191 ] 00:15:42.191 } 00:15:42.191 } 00:15:42.191 }' 00:15:42.191 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.191 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:42.191 BaseBdev2 00:15:42.191 BaseBdev3' 00:15:42.191 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.191 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:42.191 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.450 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.450 "name": "NewBaseBdev", 00:15:42.450 "aliases": [ 00:15:42.450 "88e6d15f-c8c3-44b1-b7f6-c24351f97208" 00:15:42.450 ], 00:15:42.450 "product_name": "Malloc disk", 00:15:42.450 
"block_size": 512, 00:15:42.450 "num_blocks": 65536, 00:15:42.450 "uuid": "88e6d15f-c8c3-44b1-b7f6-c24351f97208", 00:15:42.450 "assigned_rate_limits": { 00:15:42.450 "rw_ios_per_sec": 0, 00:15:42.450 "rw_mbytes_per_sec": 0, 00:15:42.450 "r_mbytes_per_sec": 0, 00:15:42.450 "w_mbytes_per_sec": 0 00:15:42.450 }, 00:15:42.450 "claimed": true, 00:15:42.450 "claim_type": "exclusive_write", 00:15:42.450 "zoned": false, 00:15:42.450 "supported_io_types": { 00:15:42.450 "read": true, 00:15:42.450 "write": true, 00:15:42.450 "unmap": true, 00:15:42.450 "write_zeroes": true, 00:15:42.450 "flush": true, 00:15:42.450 "reset": true, 00:15:42.450 "compare": false, 00:15:42.450 "compare_and_write": false, 00:15:42.450 "abort": true, 00:15:42.450 "nvme_admin": false, 00:15:42.450 "nvme_io": false 00:15:42.450 }, 00:15:42.450 "memory_domains": [ 00:15:42.450 { 00:15:42.450 "dma_device_id": "system", 00:15:42.450 "dma_device_type": 1 00:15:42.450 }, 00:15:42.450 { 00:15:42.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.450 "dma_device_type": 2 00:15:42.450 } 00:15:42.450 ], 00:15:42.450 "driver_specific": {} 00:15:42.450 }' 00:15:42.450 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.450 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.709 18:59:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:42.709 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.967 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.967 "name": "BaseBdev2", 00:15:42.967 "aliases": [ 00:15:42.967 "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91" 00:15:42.967 ], 00:15:42.967 "product_name": "Malloc disk", 00:15:42.967 "block_size": 512, 00:15:42.967 "num_blocks": 65536, 00:15:42.967 "uuid": "8c44e24d-a3db-40c8-9ffb-6e9f7862fa91", 00:15:42.967 "assigned_rate_limits": { 00:15:42.967 "rw_ios_per_sec": 0, 00:15:42.967 "rw_mbytes_per_sec": 0, 00:15:42.967 "r_mbytes_per_sec": 0, 00:15:42.967 "w_mbytes_per_sec": 0 00:15:42.967 }, 00:15:42.967 "claimed": true, 00:15:42.967 "claim_type": "exclusive_write", 00:15:42.967 "zoned": false, 00:15:42.967 "supported_io_types": { 00:15:42.967 "read": true, 00:15:42.967 "write": true, 00:15:42.967 "unmap": true, 00:15:42.967 "write_zeroes": true, 00:15:42.967 "flush": true, 00:15:42.967 "reset": true, 00:15:42.967 "compare": false, 00:15:42.967 "compare_and_write": false, 00:15:42.967 "abort": true, 00:15:42.967 "nvme_admin": false, 00:15:42.967 "nvme_io": false 00:15:42.967 }, 00:15:42.967 "memory_domains": [ 00:15:42.967 { 00:15:42.967 
"dma_device_id": "system", 00:15:42.967 "dma_device_type": 1 00:15:42.967 }, 00:15:42.967 { 00:15:42.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.968 "dma_device_type": 2 00:15:42.968 } 00:15:42.968 ], 00:15:42.968 "driver_specific": {} 00:15:42.968 }' 00:15:42.968 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.968 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:43.226 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:43.484 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:43.484 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:43.484 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:43.484 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:43.484 18:59:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:43.484 "name": "BaseBdev3", 00:15:43.484 "aliases": [ 00:15:43.484 "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578" 00:15:43.484 ], 00:15:43.484 "product_name": "Malloc disk", 00:15:43.484 "block_size": 512, 00:15:43.484 "num_blocks": 65536, 00:15:43.484 "uuid": "33a5c9a7-068c-45e9-a0d7-f46e2fc5c578", 00:15:43.484 "assigned_rate_limits": { 00:15:43.484 "rw_ios_per_sec": 0, 00:15:43.484 "rw_mbytes_per_sec": 0, 00:15:43.484 "r_mbytes_per_sec": 0, 00:15:43.484 "w_mbytes_per_sec": 0 00:15:43.484 }, 00:15:43.484 "claimed": true, 00:15:43.484 "claim_type": "exclusive_write", 00:15:43.484 "zoned": false, 00:15:43.484 "supported_io_types": { 00:15:43.484 "read": true, 00:15:43.484 "write": true, 00:15:43.484 "unmap": true, 00:15:43.484 "write_zeroes": true, 00:15:43.484 "flush": true, 00:15:43.484 "reset": true, 00:15:43.485 "compare": false, 00:15:43.485 "compare_and_write": false, 00:15:43.485 "abort": true, 00:15:43.485 "nvme_admin": false, 00:15:43.485 "nvme_io": false 00:15:43.485 }, 00:15:43.485 "memory_domains": [ 00:15:43.485 { 00:15:43.485 "dma_device_id": "system", 00:15:43.485 "dma_device_type": 1 00:15:43.485 }, 00:15:43.485 { 00:15:43.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.485 "dma_device_type": 2 00:15:43.485 } 00:15:43.485 ], 00:15:43.485 "driver_specific": {} 00:15:43.485 }' 00:15:43.485 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:43.743 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:44.001 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:44.001 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:44.001 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:44.260 [2024-06-10 18:59:58.780334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.260 [2024-06-10 18:59:58.780359] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.260 [2024-06-10 18:59:58.780405] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.260 [2024-06-10 18:59:58.780451] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.260 [2024-06-10 18:59:58.780462] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2090fe0 name Existed_Raid, state offline 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1647959 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1647959 ']' 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1647959 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # uname 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1647959 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1647959' 00:15:44.260 killing process with pid 1647959 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1647959 00:15:44.260 [2024-06-10 18:59:58.860520] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.260 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1647959 00:15:44.260 [2024-06-10 18:59:58.883609] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.519 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:44.519 00:15:44.519 real 0m26.674s 00:15:44.519 user 0m48.871s 00:15:44.519 sys 0m4.911s 00:15:44.519 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:44.519 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.519 ************************************ 00:15:44.519 END TEST raid_state_function_test_sb 00:15:44.519 ************************************ 00:15:44.519 18:59:59 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:44.519 18:59:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:15:44.519 18:59:59 bdev_raid -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:15:44.519 18:59:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.519 ************************************ 00:15:44.519 START TEST raid_superblock_test 00:15:44.519 ************************************ 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 3 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1653116 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1653116 /var/tmp/spdk-raid.sock 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1653116 ']' 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:44.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:44.519 18:59:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.519 [2024-06-10 18:59:59.220485] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:15:44.519 [2024-06-10 18:59:59.220540] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653116 ] 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.0 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.1 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.2 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.3 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.4 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.5 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.6 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:01.7 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:02.0 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:02.1 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:02.2 cannot be used 00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:44.779 EAL: Requested device 0000:b6:02.3 cannot be used 00:15:44.779 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b6:02.4 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b6:02.5 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b6:02.6 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b6:02.7 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.0 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.1 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.2 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.3 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.4 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.5 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.6 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:01.7 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.0 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.1 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.2 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.3 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.4 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.5 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.6 cannot be used
00:15:44.779 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:15:44.779 EAL: Requested device 0000:b8:02.7 cannot be used
00:15:44.779 [2024-06-10 18:59:59.356316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:44.779 [2024-06-10 18:59:59.440294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:15:44.779 [2024-06-10 18:59:59.496716] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:44.779 [2024-06-10 18:59:59.496740] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:15:45.716 malloc1
00:15:45.716 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:45.716 [2024-06-10 19:00:00.464302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:45.716 [2024-06-10 19:00:00.464347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:45.716 [2024-06-10 19:00:00.464364] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa13b70
00:15:45.716 [2024-06-10 19:00:00.464376] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:45.716 [2024-06-10 19:00:00.465830] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:45.716 [2024-06-10 19:00:00.465860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:45.716 pt1
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:45.978 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:15:45.978 malloc2
00:15:45.979 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:46.244 [2024-06-10 19:00:00.921983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:46.244 [2024-06-10 19:00:00.922026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:46.244 [2024-06-10 19:00:00.922042] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa14f70
00:15:46.244 [2024-06-10 19:00:00.922053] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:46.244 [2024-06-10 19:00:00.923468] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:46.244 [2024-06-10 19:00:00.923497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:46.244 pt2
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:46.244 19:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:15:46.503 malloc3
00:15:46.503 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:46.761 [2024-06-10 19:00:01.379475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:46.761 [2024-06-10 19:00:01.379521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:46.761 [2024-06-10 19:00:01.379538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbab940
00:15:46.761 [2024-06-10 19:00:01.379550] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:46.761 [2024-06-10 19:00:01.380972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:46.761 [2024-06-10 19:00:01.381001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:46.761 pt3
00:15:46.761 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:15:46.761 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:15:46.761 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:15:47.017 [2024-06-10 19:00:01.596063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:47.017 [2024-06-10 19:00:01.597265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:47.017 [2024-06-10 19:00:01.597314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:47.017 [2024-06-10 19:00:01.597454] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa0c210
00:15:47.017 [2024-06-10 19:00:01.597465] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:47.017 [2024-06-10 19:00:01.597657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa13840
00:15:47.017 [2024-06-10 19:00:01.597788] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa0c210
00:15:47.017 [2024-06-10 19:00:01.597798] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa0c210
00:15:47.017 [2024-06-10 19:00:01.597892] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:47.017 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:47.276 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:47.276 "name": "raid_bdev1",
00:15:47.276 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092",
00:15:47.276 "strip_size_kb": 64,
00:15:47.276 "state": "online",
00:15:47.276 "raid_level": "raid0",
00:15:47.276 "superblock": true,
00:15:47.276 "num_base_bdevs": 3,
00:15:47.276 "num_base_bdevs_discovered": 3,
00:15:47.276 "num_base_bdevs_operational": 3,
00:15:47.276 "base_bdevs_list": [
00:15:47.276 {
00:15:47.276 "name": "pt1",
00:15:47.276 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:47.276 "is_configured": true,
00:15:47.276 "data_offset": 2048,
00:15:47.276 "data_size": 63488
00:15:47.276 },
00:15:47.276 {
00:15:47.276 "name": "pt2",
00:15:47.276 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:47.276 "is_configured": true,
00:15:47.276 "data_offset": 2048,
00:15:47.276 "data_size": 63488
00:15:47.276 },
00:15:47.276 {
00:15:47.276 "name": "pt3",
00:15:47.276 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:47.276 "is_configured": true,
00:15:47.276 "data_offset": 2048,
00:15:47.276 "data_size": 63488
00:15:47.276 }
00:15:47.276 ]
00:15:47.276 }'
00:15:47.276 19:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:47.276 19:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:47.848 [2024-06-10 19:00:02.518711] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:15:47.848 "name": "raid_bdev1",
00:15:47.848 "aliases": [
00:15:47.848 "a6cfc003-00ea-4aa5-81ee-7cce54093092"
00:15:47.848 ],
00:15:47.848 "product_name": "Raid Volume",
00:15:47.848 "block_size": 512,
00:15:47.848 "num_blocks": 190464,
00:15:47.848 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092",
00:15:47.848 "assigned_rate_limits": {
00:15:47.848 "rw_ios_per_sec": 0,
00:15:47.848 "rw_mbytes_per_sec": 0,
00:15:47.848 "r_mbytes_per_sec": 0,
00:15:47.848 "w_mbytes_per_sec": 0
00:15:47.848 },
00:15:47.848 "claimed": false,
00:15:47.848 "zoned": false,
00:15:47.848 "supported_io_types": {
00:15:47.848 "read": true,
00:15:47.848 "write": true,
00:15:47.848 "unmap": true,
00:15:47.848 "write_zeroes": true,
00:15:47.848 "flush": true,
00:15:47.848 "reset": true,
00:15:47.848 "compare": false,
00:15:47.848 "compare_and_write": false,
00:15:47.848 "abort": false,
00:15:47.848 "nvme_admin": false,
00:15:47.848 "nvme_io": false
00:15:47.848 },
00:15:47.848 "memory_domains": [
00:15:47.848 {
00:15:47.848 "dma_device_id": "system",
00:15:47.848 "dma_device_type": 1
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:47.848 "dma_device_type": 2
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "dma_device_id": "system",
00:15:47.848 "dma_device_type": 1
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:47.848 "dma_device_type": 2
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "dma_device_id": "system",
00:15:47.848 "dma_device_type": 1
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:47.848 "dma_device_type": 2
00:15:47.848 }
00:15:47.848 ],
00:15:47.848 "driver_specific": {
00:15:47.848 "raid": {
00:15:47.848 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092",
00:15:47.848 "strip_size_kb": 64,
00:15:47.848 "state": "online",
00:15:47.848 "raid_level": "raid0",
00:15:47.848 "superblock": true,
00:15:47.848 "num_base_bdevs": 3,
00:15:47.848 "num_base_bdevs_discovered": 3,
00:15:47.848 "num_base_bdevs_operational": 3,
00:15:47.848 "base_bdevs_list": [
00:15:47.848 {
00:15:47.848 "name": "pt1",
00:15:47.848 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:47.848 "is_configured": true,
00:15:47.848 "data_offset": 2048,
00:15:47.848 "data_size": 63488
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "name": "pt2",
00:15:47.848 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:47.848 "is_configured": true,
00:15:47.848 "data_offset": 2048,
00:15:47.848 "data_size": 63488
00:15:47.848 },
00:15:47.848 {
00:15:47.848 "name": "pt3",
00:15:47.848 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:47.848 "is_configured": true,
00:15:47.848 "data_offset": 2048,
00:15:47.848 "data_size": 63488
00:15:47.848 }
00:15:47.848 ]
00:15:47.848 }
00:15:47.848 }
00:15:47.848 }'
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:15:47.848 pt2
00:15:47.848 pt3'
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:15:47.848 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:15:48.107 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:15:48.107 "name": "pt1",
00:15:48.107 "aliases": [
00:15:48.107 "00000000-0000-0000-0000-000000000001"
00:15:48.107 ],
00:15:48.107 "product_name": "passthru",
00:15:48.107 "block_size": 512,
00:15:48.107 "num_blocks": 65536,
00:15:48.107 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:48.107 "assigned_rate_limits": {
00:15:48.107 "rw_ios_per_sec": 0,
00:15:48.107 "rw_mbytes_per_sec": 0,
00:15:48.107 "r_mbytes_per_sec": 0,
00:15:48.107 "w_mbytes_per_sec": 0
00:15:48.107 },
00:15:48.107 "claimed": true,
00:15:48.107 "claim_type": "exclusive_write",
00:15:48.107 "zoned": false,
00:15:48.107 "supported_io_types": {
00:15:48.107 "read": true,
00:15:48.107 "write": true,
00:15:48.107 "unmap": true,
00:15:48.107 "write_zeroes": true,
00:15:48.107 "flush": true,
00:15:48.107 "reset": true,
00:15:48.107 "compare": false,
00:15:48.107 "compare_and_write": false,
00:15:48.107 "abort": true,
00:15:48.107 "nvme_admin": false,
00:15:48.107 "nvme_io": false
00:15:48.107 },
00:15:48.107 "memory_domains": [
00:15:48.107 {
00:15:48.107 "dma_device_id": "system",
00:15:48.107 "dma_device_type": 1
00:15:48.107 },
00:15:48.107 {
00:15:48.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:48.107 "dma_device_type": 2
00:15:48.107 }
00:15:48.107 ],
00:15:48.107 "driver_specific": {
00:15:48.107 "passthru": {
00:15:48.107 "name": "pt1",
00:15:48.107 "base_bdev_name": "malloc1"
00:15:48.107 }
00:15:48.107 }
00:15:48.107 }'
00:15:48.107 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:48.107 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:48.365 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:15:48.365 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:48.365 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:48.365 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:15:48.365 19:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:48.365 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:48.365 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:15:48.365 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:48.365 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:48.628 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:15:48.628 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:15:48.628 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:15:48.628 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:15:48.628 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:15:48.628 "name": "pt2",
00:15:48.628 "aliases": [
00:15:48.628 "00000000-0000-0000-0000-000000000002"
00:15:48.628 ],
00:15:48.628 "product_name": "passthru",
00:15:48.628 "block_size": 512,
00:15:48.628 "num_blocks": 65536,
00:15:48.628 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:48.628 "assigned_rate_limits": {
00:15:48.628 "rw_ios_per_sec": 0,
00:15:48.628 "rw_mbytes_per_sec": 0,
00:15:48.628 "r_mbytes_per_sec": 0,
00:15:48.628 "w_mbytes_per_sec": 0
00:15:48.628 },
00:15:48.628 "claimed": true,
00:15:48.628 "claim_type": "exclusive_write",
00:15:48.628 "zoned": false,
00:15:48.628 "supported_io_types": {
00:15:48.628 "read": true,
00:15:48.628 "write": true,
00:15:48.628 "unmap": true,
00:15:48.628 "write_zeroes": true,
00:15:48.628 "flush": true,
00:15:48.628 "reset": true,
00:15:48.628 "compare": false,
00:15:48.628 "compare_and_write": false,
00:15:48.628 "abort": true,
00:15:48.628 "nvme_admin": false,
00:15:48.628 "nvme_io": false
00:15:48.628 },
00:15:48.628 "memory_domains": [
00:15:48.628 {
00:15:48.628 "dma_device_id": "system",
00:15:48.628 "dma_device_type": 1
00:15:48.628 },
00:15:48.628 {
00:15:48.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:48.628 "dma_device_type": 2
00:15:48.628 }
00:15:48.628 ],
00:15:48.628 "driver_specific": {
00:15:48.628 "passthru": {
00:15:48.628 "name": "pt2",
00:15:48.628 "base_bdev_name": "malloc2"
00:15:48.628 }
00:15:48.628 }
00:15:48.628 }'
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:15:48.887 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:49.146 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:49.146 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:15:49.146 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:15:49.146 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3
00:15:49.146 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:15:49.404 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:15:49.404 "name": "pt3",
00:15:49.404 "aliases": [
00:15:49.404 "00000000-0000-0000-0000-000000000003"
00:15:49.404 ],
00:15:49.404 "product_name": "passthru",
00:15:49.404 "block_size": 512,
00:15:49.404 "num_blocks": 65536,
00:15:49.404 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:49.404 "assigned_rate_limits": {
00:15:49.404 "rw_ios_per_sec": 0,
00:15:49.404 "rw_mbytes_per_sec": 0,
00:15:49.404 "r_mbytes_per_sec": 0,
00:15:49.404 "w_mbytes_per_sec": 0
00:15:49.404 },
00:15:49.404 "claimed": true,
00:15:49.404 "claim_type": "exclusive_write",
00:15:49.404 "zoned": false,
00:15:49.404 "supported_io_types": {
00:15:49.404 "read": true,
00:15:49.404 "write": true,
00:15:49.404 "unmap": true,
00:15:49.404 "write_zeroes": true,
00:15:49.404 "flush": true,
00:15:49.404 "reset": true,
00:15:49.404 "compare": false,
00:15:49.404 "compare_and_write": false,
00:15:49.404 "abort": true,
00:15:49.404 "nvme_admin": false,
00:15:49.404 "nvme_io": false
00:15:49.404 },
00:15:49.404 "memory_domains": [
00:15:49.404 {
00:15:49.404 "dma_device_id": "system",
00:15:49.404 "dma_device_type": 1
00:15:49.404 },
00:15:49.404 {
00:15:49.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:49.404 "dma_device_type": 2
00:15:49.404 }
00:15:49.404 ],
00:15:49.404 "driver_specific": {
00:15:49.404 "passthru": {
00:15:49.404 "name": "pt3",
00:15:49.404 "base_bdev_name": "malloc3"
00:15:49.404 }
00:15:49.404 }
00:15:49.404 }'
00:15:49.404 19:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:49.405 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:15:49.405 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:15:49.405 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:49.405 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:15:49.405 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:15:49.405 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:49.663 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:15:49.663 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:15:49.664 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:49.664 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:15:49.664 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:15:49.664 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:49.664 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid'
00:15:49.922 [2024-06-10 19:00:04.499944] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:49.923 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a6cfc003-00ea-4aa5-81ee-7cce54093092
00:15:49.923 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a6cfc003-00ea-4aa5-81ee-7cce54093092 ']'
00:15:49.923 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:50.181 [2024-06-10 19:00:04.728302] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:50.181 [2024-06-10 19:00:04.728318] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:50.181 [2024-06-10 19:00:04.728364] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:50.181 [2024-06-10 19:00:04.728410] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:50.181 [2024-06-10 19:00:04.728420] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa0c210 name raid_bdev1, state offline
00:15:50.181 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:50.181 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:15:50.440 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:15:50.440 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:15:50.440 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:15:50.440 19:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:50.699 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:15:50.699 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:50.699 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:15:50.699 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:15:50.958 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:50.958 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:15:51.218 19:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:51.476 [2024-06-10 19:00:06.083823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:51.476 [2024-06-10 19:00:06.085088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:51.476 [2024-06-10 19:00:06.085129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:51.476 [2024-06-10 19:00:06.085171] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:51.476 [2024-06-10 19:00:06.085211] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:51.477 [2024-06-10 19:00:06.085233] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:51.477 [2024-06-10 19:00:06.085250] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:51.477 [2024-06-10 19:00:06.085259] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbb75d0 name raid_bdev1, state configuring
00:15:51.477 request:
00:15:51.477 {
00:15:51.477 "name": "raid_bdev1",
00:15:51.477 "raid_level": "raid0",
00:15:51.477 "base_bdevs": [
00:15:51.477 "malloc1",
00:15:51.477 "malloc2",
00:15:51.477 "malloc3"
00:15:51.477 ],
00:15:51.477 "superblock": false,
00:15:51.477 "strip_size_kb": 64,
00:15:51.477 "method": "bdev_raid_create",
00:15:51.477 "req_id": 1
00:15:51.477 }
00:15:51.477 Got JSON-RPC error response
00:15:51.477 response:
00:15:51.477 {
00:15:51.477 "code": -17,
00:15:51.477 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:51.477 }
00:15:51.477 19:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1
00:15:51.477 19:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:15:51.477 19:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:15:51.477 19:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:15:51.477 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:51.477 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:15:51.734 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:15:51.734 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:15:51.734 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:51.993 [2024-06-10 19:00:06.540970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:51.993 [2024-06-10 19:00:06.541015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.993 [2024-06-10 19:00:06.541032] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa13da0
00:15:51.993 [2024-06-10 19:00:06.541043] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.993 [2024-06-10 19:00:06.542537] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.993 [2024-06-10 19:00:06.542567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:51.993 [2024-06-10 19:00:06.542641] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:51.993 [2024-06-10 19:00:06.542665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:51.993 pt1
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:51.993 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.252 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:52.252 "name": "raid_bdev1",
00:15:52.252 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092",
00:15:52.252 "strip_size_kb": 64,
00:15:52.252 "state": "configuring",
00:15:52.252 "raid_level": "raid0",
00:15:52.252 "superblock": true,
00:15:52.252 "num_base_bdevs": 3,
00:15:52.252 "num_base_bdevs_discovered": 1,
00:15:52.252 "num_base_bdevs_operational": 3,
00:15:52.252 "base_bdevs_list": [
00:15:52.252 {
00:15:52.252 "name": "pt1",
00:15:52.252 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:52.252 "is_configured": true,
00:15:52.252 "data_offset": 2048,
00:15:52.252 "data_size": 63488
00:15:52.252 },
00:15:52.252 {
00:15:52.252 "name": null,
00:15:52.252 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.252 "is_configured": false,
00:15:52.252 "data_offset": 2048,
00:15:52.252 "data_size": 63488
00:15:52.252 },
00:15:52.252 {
00:15:52.252 "name": null,
00:15:52.252 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:52.252 "is_configured": false,
00:15:52.252 "data_offset": 2048,
00:15:52.252 "data_size": 63488
00:15:52.252 }
00:15:52.252 ]
00:15:52.252 }'
00:15:52.252 19:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:52.252 19:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.819 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']'
00:15:52.819 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:52.819 [2024-06-10 19:00:07.559674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:52.819 [2024-06-10 19:00:07.559724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:52.819 [2024-06-10 19:00:07.559741] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbae0e0
00:15:52.819 [2024-06-10 19:00:07.559759] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:52.819 [2024-06-10 19:00:07.560079] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:52.819 [2024-06-10 19:00:07.560097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:52.819 [2024-06-10 19:00:07.560156] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:52.819 [2024-06-10 19:00:07.560174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:52.819 pt2
00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:53.077 [2024-06-10 19:00:07.788279] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.077 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.336 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.336 "name": "raid_bdev1", 00:15:53.336 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092", 00:15:53.336 "strip_size_kb": 64, 00:15:53.336 "state": "configuring", 00:15:53.336 "raid_level": "raid0", 00:15:53.336 "superblock": true, 00:15:53.336 "num_base_bdevs": 3, 00:15:53.336 "num_base_bdevs_discovered": 1, 00:15:53.336 "num_base_bdevs_operational": 3, 00:15:53.336 "base_bdevs_list": [ 00:15:53.336 { 00:15:53.336 "name": "pt1", 00:15:53.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:53.336 "is_configured": true, 00:15:53.336 "data_offset": 2048, 00:15:53.336 "data_size": 63488 00:15:53.336 }, 00:15:53.336 { 00:15:53.336 "name": null, 00:15:53.336 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:53.336 "is_configured": false, 00:15:53.336 "data_offset": 2048, 00:15:53.336 "data_size": 63488 00:15:53.336 }, 00:15:53.336 { 00:15:53.336 "name": null, 00:15:53.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.336 "is_configured": false, 00:15:53.336 "data_offset": 2048, 00:15:53.336 "data_size": 63488 00:15:53.336 } 00:15:53.336 ] 00:15:53.336 }' 00:15:53.336 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.336 19:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.903 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:53.903 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:53.903 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.162 [2024-06-10 19:00:08.782901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.162 [2024-06-10 19:00:08.782947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.162 [2024-06-10 19:00:08.782965] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa0a4c0 00:15:54.162 [2024-06-10 19:00:08.782977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.162 [2024-06-10 19:00:08.783291] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.162 [2024-06-10 19:00:08.783314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.162 [2024-06-10 19:00:08.783375] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:54.162 [2024-06-10 19:00:08.783393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:15:54.162 pt2 00:15:54.162 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:54.162 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:54.162 19:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.421 [2024-06-10 19:00:09.007492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.421 [2024-06-10 19:00:09.007524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.421 [2024-06-10 19:00:09.007540] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa0a700 00:15:54.421 [2024-06-10 19:00:09.007551] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.421 [2024-06-10 19:00:09.007838] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.421 [2024-06-10 19:00:09.007855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.421 [2024-06-10 19:00:09.007903] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:54.421 [2024-06-10 19:00:09.007919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.421 [2024-06-10 19:00:09.008013] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa0ce90 00:15:54.421 [2024-06-10 19:00:09.008023] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:54.421 [2024-06-10 19:00:09.008177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbad2c0 00:15:54.421 [2024-06-10 19:00:09.008299] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa0ce90 00:15:54.421 [2024-06-10 19:00:09.008309] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa0ce90 00:15:54.421 [2024-06-10 19:00:09.008396] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.421 pt3 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.421 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.681 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.681 "name": 
"raid_bdev1", 00:15:54.681 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092", 00:15:54.681 "strip_size_kb": 64, 00:15:54.681 "state": "online", 00:15:54.681 "raid_level": "raid0", 00:15:54.681 "superblock": true, 00:15:54.681 "num_base_bdevs": 3, 00:15:54.681 "num_base_bdevs_discovered": 3, 00:15:54.681 "num_base_bdevs_operational": 3, 00:15:54.681 "base_bdevs_list": [ 00:15:54.681 { 00:15:54.681 "name": "pt1", 00:15:54.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.681 "is_configured": true, 00:15:54.681 "data_offset": 2048, 00:15:54.681 "data_size": 63488 00:15:54.681 }, 00:15:54.681 { 00:15:54.681 "name": "pt2", 00:15:54.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.681 "is_configured": true, 00:15:54.681 "data_offset": 2048, 00:15:54.681 "data_size": 63488 00:15:54.681 }, 00:15:54.681 { 00:15:54.681 "name": "pt3", 00:15:54.681 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.681 "is_configured": true, 00:15:54.681 "data_offset": 2048, 00:15:54.681 "data_size": 63488 00:15:54.681 } 00:15:54.681 ] 00:15:54.681 }' 00:15:54.681 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.681 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:55.248 [2024-06-10 19:00:09.938169] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:55.248 "name": "raid_bdev1", 00:15:55.248 "aliases": [ 00:15:55.248 "a6cfc003-00ea-4aa5-81ee-7cce54093092" 00:15:55.248 ], 00:15:55.248 "product_name": "Raid Volume", 00:15:55.248 "block_size": 512, 00:15:55.248 "num_blocks": 190464, 00:15:55.248 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092", 00:15:55.248 "assigned_rate_limits": { 00:15:55.248 "rw_ios_per_sec": 0, 00:15:55.248 "rw_mbytes_per_sec": 0, 00:15:55.248 "r_mbytes_per_sec": 0, 00:15:55.248 "w_mbytes_per_sec": 0 00:15:55.248 }, 00:15:55.248 "claimed": false, 00:15:55.248 "zoned": false, 00:15:55.248 "supported_io_types": { 00:15:55.248 "read": true, 00:15:55.248 "write": true, 00:15:55.248 "unmap": true, 00:15:55.248 "write_zeroes": true, 00:15:55.248 "flush": true, 00:15:55.248 "reset": true, 00:15:55.248 "compare": false, 00:15:55.248 "compare_and_write": false, 00:15:55.248 "abort": false, 00:15:55.248 "nvme_admin": false, 00:15:55.248 "nvme_io": false 00:15:55.248 }, 00:15:55.248 "memory_domains": [ 00:15:55.248 { 00:15:55.248 "dma_device_id": "system", 00:15:55.248 "dma_device_type": 1 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.248 "dma_device_type": 2 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 "dma_device_id": "system", 00:15:55.248 "dma_device_type": 1 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.248 "dma_device_type": 2 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 "dma_device_id": "system", 00:15:55.248 "dma_device_type": 1 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.248 "dma_device_type": 2 00:15:55.248 } 00:15:55.248 ], 00:15:55.248 "driver_specific": { 00:15:55.248 "raid": { 00:15:55.248 "uuid": "a6cfc003-00ea-4aa5-81ee-7cce54093092", 00:15:55.248 "strip_size_kb": 64, 00:15:55.248 "state": "online", 00:15:55.248 "raid_level": "raid0", 00:15:55.248 "superblock": true, 00:15:55.248 "num_base_bdevs": 3, 00:15:55.248 "num_base_bdevs_discovered": 3, 00:15:55.248 "num_base_bdevs_operational": 3, 00:15:55.248 "base_bdevs_list": [ 00:15:55.248 { 00:15:55.248 "name": "pt1", 00:15:55.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.248 "is_configured": true, 00:15:55.248 "data_offset": 2048, 00:15:55.248 "data_size": 63488 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 "name": "pt2", 00:15:55.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.248 "is_configured": true, 00:15:55.248 "data_offset": 2048, 00:15:55.248 "data_size": 63488 00:15:55.248 }, 00:15:55.248 { 00:15:55.248 "name": "pt3", 00:15:55.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.248 "is_configured": true, 00:15:55.248 "data_offset": 2048, 00:15:55.248 "data_size": 63488 00:15:55.248 } 00:15:55.248 ] 00:15:55.248 } 00:15:55.248 } 00:15:55.248 }' 00:15:55.248 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.507 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:55.507 pt2 00:15:55.507 pt3' 00:15:55.507 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:55.507 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:55.507 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:55.507 19:00:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:55.507 "name": "pt1", 00:15:55.507 "aliases": [ 00:15:55.507 "00000000-0000-0000-0000-000000000001" 00:15:55.507 ], 00:15:55.507 "product_name": "passthru", 00:15:55.507 "block_size": 512, 00:15:55.507 "num_blocks": 65536, 00:15:55.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.507 "assigned_rate_limits": { 00:15:55.507 "rw_ios_per_sec": 0, 00:15:55.507 "rw_mbytes_per_sec": 0, 00:15:55.507 "r_mbytes_per_sec": 0, 00:15:55.507 "w_mbytes_per_sec": 0 00:15:55.507 }, 00:15:55.507 "claimed": true, 00:15:55.507 "claim_type": "exclusive_write", 00:15:55.507 "zoned": false, 00:15:55.507 "supported_io_types": { 00:15:55.507 "read": true, 00:15:55.507 "write": true, 00:15:55.507 "unmap": true, 00:15:55.507 "write_zeroes": true, 00:15:55.507 "flush": true, 00:15:55.507 "reset": true, 00:15:55.507 "compare": false, 00:15:55.507 "compare_and_write": false, 00:15:55.507 "abort": true, 00:15:55.507 "nvme_admin": false, 00:15:55.507 "nvme_io": false 00:15:55.507 }, 00:15:55.507 "memory_domains": [ 00:15:55.507 { 00:15:55.507 "dma_device_id": "system", 00:15:55.507 "dma_device_type": 1 00:15:55.507 }, 00:15:55.507 { 00:15:55.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.507 "dma_device_type": 2 00:15:55.507 } 00:15:55.507 ], 00:15:55.507 "driver_specific": { 00:15:55.507 "passthru": { 00:15:55.507 "name": "pt1", 00:15:55.507 "base_bdev_name": "malloc1" 00:15:55.507 } 00:15:55.507 } 00:15:55.507 }' 00:15:55.507 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:55.766 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.025 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.025 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.025 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:56.025 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.283 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.283 "name": "pt2", 00:15:56.283 "aliases": [ 00:15:56.283 "00000000-0000-0000-0000-000000000002" 00:15:56.283 ], 00:15:56.283 "product_name": "passthru", 00:15:56.283 "block_size": 512, 00:15:56.283 "num_blocks": 65536, 00:15:56.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.283 "assigned_rate_limits": { 00:15:56.283 "rw_ios_per_sec": 0, 00:15:56.283 "rw_mbytes_per_sec": 0, 00:15:56.283 "r_mbytes_per_sec": 0, 00:15:56.283 "w_mbytes_per_sec": 0 00:15:56.283 }, 00:15:56.283 "claimed": true, 00:15:56.283 "claim_type": "exclusive_write", 00:15:56.283 "zoned": false, 00:15:56.283 "supported_io_types": { 00:15:56.283 "read": true, 00:15:56.283 "write": true, 00:15:56.283 "unmap": true, 00:15:56.283 "write_zeroes": true, 00:15:56.283 "flush": true, 00:15:56.283 "reset": 
true, 00:15:56.283 "compare": false, 00:15:56.283 "compare_and_write": false, 00:15:56.283 "abort": true, 00:15:56.283 "nvme_admin": false, 00:15:56.283 "nvme_io": false 00:15:56.283 }, 00:15:56.283 "memory_domains": [ 00:15:56.283 { 00:15:56.283 "dma_device_id": "system", 00:15:56.283 "dma_device_type": 1 00:15:56.283 }, 00:15:56.283 { 00:15:56.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.283 "dma_device_type": 2 00:15:56.283 } 00:15:56.283 ], 00:15:56.283 "driver_specific": { 00:15:56.283 "passthru": { 00:15:56.283 "name": "pt2", 00:15:56.283 "base_bdev_name": "malloc2" 00:15:56.283 } 00:15:56.283 } 00:15:56.283 }' 00:15:56.283 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.283 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.283 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.284 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.284 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.284 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.284 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.284 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:56.543 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.801 "name": "pt3", 00:15:56.801 "aliases": [ 00:15:56.801 "00000000-0000-0000-0000-000000000003" 00:15:56.801 ], 00:15:56.801 "product_name": "passthru", 00:15:56.801 "block_size": 512, 00:15:56.801 "num_blocks": 65536, 00:15:56.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.801 "assigned_rate_limits": { 00:15:56.801 "rw_ios_per_sec": 0, 00:15:56.801 "rw_mbytes_per_sec": 0, 00:15:56.801 "r_mbytes_per_sec": 0, 00:15:56.801 "w_mbytes_per_sec": 0 00:15:56.801 }, 00:15:56.801 "claimed": true, 00:15:56.801 "claim_type": "exclusive_write", 00:15:56.801 "zoned": false, 00:15:56.801 "supported_io_types": { 00:15:56.801 "read": true, 00:15:56.801 "write": true, 00:15:56.801 "unmap": true, 00:15:56.801 "write_zeroes": true, 00:15:56.801 "flush": true, 00:15:56.801 "reset": true, 00:15:56.801 "compare": false, 00:15:56.801 "compare_and_write": false, 00:15:56.801 "abort": true, 00:15:56.801 "nvme_admin": false, 00:15:56.801 "nvme_io": false 00:15:56.801 }, 00:15:56.801 "memory_domains": [ 00:15:56.801 { 00:15:56.801 "dma_device_id": "system", 00:15:56.801 "dma_device_type": 1 00:15:56.801 }, 00:15:56.801 { 00:15:56.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.801 "dma_device_type": 2 00:15:56.801 } 00:15:56.801 ], 00:15:56.801 "driver_specific": { 00:15:56.801 "passthru": { 00:15:56.801 "name": "pt3", 00:15:56.801 "base_bdev_name": "malloc3" 00:15:56.801 } 00:15:56.801 } 00:15:56.801 }' 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.801 19:00:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.801 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:57.059 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:57.318 [2024-06-10 19:00:11.887313] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a6cfc003-00ea-4aa5-81ee-7cce54093092 '!=' a6cfc003-00ea-4aa5-81ee-7cce54093092 ']' 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1653116 00:15:57.318 19:00:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1653116 ']' 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1653116 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1653116 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1653116' 00:15:57.318 killing process with pid 1653116 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1653116 00:15:57.318 [2024-06-10 19:00:11.968049] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.318 [2024-06-10 19:00:11.968097] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.318 [2024-06-10 19:00:11.968142] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.318 [2024-06-10 19:00:11.968152] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa0ce90 name raid_bdev1, state offline 00:15:57.318 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1653116 00:15:57.318 [2024-06-10 19:00:11.991650] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.577 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:57.577 00:15:57.577 real 0m13.023s 00:15:57.577 user 0m23.385s 00:15:57.577 sys 0m2.417s 00:15:57.577 19:00:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:57.577 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.577 ************************************ 00:15:57.577 END TEST raid_superblock_test 00:15:57.577 ************************************ 00:15:57.577 19:00:12 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:15:57.577 19:00:12 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:57.577 19:00:12 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:57.577 19:00:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.577 ************************************ 00:15:57.577 START TEST raid_read_error_test 00:15:57.577 ************************************ 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 3 read 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:57.577 
19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.zlIH0EoVFy 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1656335 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1656335 /var/tmp/spdk-raid.sock 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1656335 ']' 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:57.577 19:00:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.837 [2024-06-10 19:00:12.343354] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:15:57.837 [2024-06-10 19:00:12.343412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656335 ] 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.0 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.1 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.2 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.3 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.4 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.5 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.6 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:01.7 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.0 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.1 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.2 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.3 cannot be used 00:15:57.837 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.4 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.5 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.6 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b6:02.7 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.0 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.1 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.2 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.3 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.4 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.5 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.6 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:01.7 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.0 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.1 cannot be used 00:15:57.837 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.2 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.3 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.4 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.5 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.6 cannot be used 00:15:57.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.837 EAL: Requested device 0000:b8:02.7 cannot be used 00:15:57.837 [2024-06-10 19:00:12.475408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.837 [2024-06-10 19:00:12.561886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.097 [2024-06-10 19:00:12.621605] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.097 [2024-06-10 19:00:12.621646] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.664 19:00:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:58.664 19:00:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:15:58.664 19:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:58.664 19:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.922 BaseBdev1_malloc 00:15:58.922 19:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:59.180 true 00:15:59.180 19:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:59.180 [2024-06-10 19:00:13.910728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:59.180 [2024-06-10 19:00:13.910766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.180 [2024-06-10 19:00:13.910784] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9cdd50 00:15:59.180 [2024-06-10 19:00:13.910796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.180 [2024-06-10 19:00:13.912367] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.180 [2024-06-10 19:00:13.912399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.180 BaseBdev1 00:15:59.180 19:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:59.180 19:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.438 BaseBdev2_malloc 00:15:59.438 19:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:59.697 true 00:15:59.697 19:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:59.966 [2024-06-10 19:00:14.596916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:15:59.966 [2024-06-10 19:00:14.596953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.966 [2024-06-10 19:00:14.596970] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9d32e0 00:15:59.966 [2024-06-10 19:00:14.596981] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.966 [2024-06-10 19:00:14.598299] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.966 [2024-06-10 19:00:14.598326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.966 BaseBdev2 00:15:59.966 19:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:59.966 19:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:00.293 BaseBdev3_malloc 00:16:00.293 19:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:00.293 true 00:16:00.552 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:00.552 [2024-06-10 19:00:15.254795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:00.552 [2024-06-10 19:00:15.254836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.552 [2024-06-10 19:00:15.254854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9d4fd0 00:16:00.552 [2024-06-10 19:00:15.254866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.552 [2024-06-10 19:00:15.256262] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.552 [2024-06-10 19:00:15.256291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:00.552 BaseBdev3 00:16:00.552 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:00.811 [2024-06-10 19:00:15.479414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.811 [2024-06-10 19:00:15.480600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.811 [2024-06-10 19:00:15.480664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.811 [2024-06-10 19:00:15.480855] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9d63f0 00:16:00.811 [2024-06-10 19:00:15.480866] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:00.811 [2024-06-10 19:00:15.481041] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8297e0 00:16:00.811 [2024-06-10 19:00:15.481177] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9d63f0 00:16:00.811 [2024-06-10 19:00:15.481187] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9d63f0 00:16:00.811 [2024-06-10 19:00:15.481286] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.811 19:00:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.811 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.070 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.070 "name": "raid_bdev1", 00:16:01.070 "uuid": "7609a56a-0d06-40ee-9fd8-0638830b693f", 00:16:01.070 "strip_size_kb": 64, 00:16:01.070 "state": "online", 00:16:01.070 "raid_level": "raid0", 00:16:01.070 "superblock": true, 00:16:01.070 "num_base_bdevs": 3, 00:16:01.070 "num_base_bdevs_discovered": 3, 00:16:01.070 "num_base_bdevs_operational": 3, 00:16:01.070 "base_bdevs_list": [ 00:16:01.070 { 00:16:01.070 "name": "BaseBdev1", 00:16:01.070 "uuid": "cba394d9-41bf-51f4-8318-f3f52b126af3", 00:16:01.070 "is_configured": true, 00:16:01.070 "data_offset": 2048, 00:16:01.070 "data_size": 63488 00:16:01.070 }, 00:16:01.070 { 00:16:01.070 "name": "BaseBdev2", 00:16:01.070 "uuid": "e2bfea58-3e87-5181-8932-bbd61dd5c16f", 00:16:01.070 "is_configured": true, 00:16:01.070 "data_offset": 2048, 00:16:01.070 "data_size": 63488 00:16:01.070 }, 
00:16:01.070 { 00:16:01.070 "name": "BaseBdev3", 00:16:01.070 "uuid": "20e3b765-73b9-5fca-88c2-bf2d1fa33fa2", 00:16:01.070 "is_configured": true, 00:16:01.070 "data_offset": 2048, 00:16:01.070 "data_size": 63488 00:16:01.070 } 00:16:01.070 ] 00:16:01.070 }' 00:16:01.070 19:00:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.070 19:00:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.639 19:00:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:01.639 19:00:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:01.639 [2024-06-10 19:00:16.381986] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x526d40 00:16:02.575 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.833 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.092 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.092 "name": "raid_bdev1", 00:16:03.092 "uuid": "7609a56a-0d06-40ee-9fd8-0638830b693f", 00:16:03.092 "strip_size_kb": 64, 00:16:03.092 "state": "online", 00:16:03.092 "raid_level": "raid0", 00:16:03.092 "superblock": true, 00:16:03.092 "num_base_bdevs": 3, 00:16:03.092 "num_base_bdevs_discovered": 3, 00:16:03.092 "num_base_bdevs_operational": 3, 00:16:03.092 "base_bdevs_list": [ 00:16:03.092 { 00:16:03.092 "name": "BaseBdev1", 00:16:03.092 "uuid": "cba394d9-41bf-51f4-8318-f3f52b126af3", 00:16:03.092 "is_configured": true, 00:16:03.092 "data_offset": 2048, 00:16:03.092 "data_size": 63488 00:16:03.092 }, 00:16:03.092 { 00:16:03.092 "name": "BaseBdev2", 00:16:03.092 "uuid": "e2bfea58-3e87-5181-8932-bbd61dd5c16f", 00:16:03.092 "is_configured": true, 00:16:03.092 "data_offset": 2048, 00:16:03.092 "data_size": 63488 00:16:03.092 }, 00:16:03.092 { 00:16:03.092 "name": "BaseBdev3", 00:16:03.092 "uuid": "20e3b765-73b9-5fca-88c2-bf2d1fa33fa2", 00:16:03.092 "is_configured": true, 00:16:03.092 "data_offset": 2048, 00:16:03.092 
"data_size": 63488 00:16:03.092 } 00:16:03.092 ] 00:16:03.092 }' 00:16:03.092 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.092 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.660 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:03.919 [2024-06-10 19:00:18.423917] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.919 [2024-06-10 19:00:18.423947] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.919 [2024-06-10 19:00:18.426916] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.919 [2024-06-10 19:00:18.426948] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.919 [2024-06-10 19:00:18.426978] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.919 [2024-06-10 19:00:18.426988] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9d63f0 name raid_bdev1, state offline 00:16:03.920 0 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1656335 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1656335 ']' 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1656335 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1656335 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1656335' 00:16:03.920 killing process with pid 1656335 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1656335 00:16:03.920 [2024-06-10 19:00:18.501839] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.920 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1656335 00:16:03.920 [2024-06-10 19:00:18.519725] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.zlIH0EoVFy 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:16:04.181 00:16:04.181 real 0m6.455s 00:16:04.181 user 0m10.070s 00:16:04.181 sys 0m1.183s 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:04.181 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.181 ************************************ 00:16:04.181 END TEST raid_read_error_test 00:16:04.181 ************************************ 
00:16:04.181 19:00:18 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:16:04.181 19:00:18 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:04.181 19:00:18 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:04.181 19:00:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.181 ************************************ 00:16:04.181 START TEST raid_write_error_test 00:16:04.181 ************************************ 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 3 write 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.181 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:04.182 19:00:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0ehp35YiLh 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1657524 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1657524 /var/tmp/spdk-raid.sock 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1657524 ']' 00:16:04.182 19:00:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:04.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:04.182 19:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.182 [2024-06-10 19:00:18.884776] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:16:04.182 [2024-06-10 19:00:18.884833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657524 ] 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.0 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.1 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.2 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.3 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.4 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.5 cannot be used 00:16:04.441 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.6 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:01.7 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:02.0 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:02.1 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.441 EAL: Requested device 0000:b6:02.2 cannot be used 00:16:04.441 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b6:02.3 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b6:02.4 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b6:02.5 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b6:02.6 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b6:02.7 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.0 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.1 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.2 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.3 cannot be used 00:16:04.442 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.4 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.5 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.6 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:01.7 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.0 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.1 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.2 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.3 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.4 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.5 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.6 cannot be used 00:16:04.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:04.442 EAL: Requested device 0000:b8:02.7 cannot be used 00:16:04.442 [2024-06-10 19:00:19.020875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.442 [2024-06-10 19:00:19.107288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.442 [2024-06-10 19:00:19.173316] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:16:04.442 [2024-06-10 19:00:19.173352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.378 19:00:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:05.378 19:00:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:16:05.378 19:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:05.378 19:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:05.378 BaseBdev1_malloc 00:16:05.378 19:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:05.636 true 00:16:05.636 19:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:05.895 [2024-06-10 19:00:20.411543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:05.895 [2024-06-10 19:00:20.411584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.895 [2024-06-10 19:00:20.411602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1975d50 00:16:05.895 [2024-06-10 19:00:20.411613] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.895 [2024-06-10 19:00:20.413085] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.895 [2024-06-10 19:00:20.413111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:05.895 BaseBdev1 00:16:05.895 19:00:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:05.895 19:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:06.154 BaseBdev2_malloc 00:16:06.154 19:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:06.154 true 00:16:06.155 19:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:06.419 [2024-06-10 19:00:21.101582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:06.419 [2024-06-10 19:00:21.101623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.419 [2024-06-10 19:00:21.101641] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197b2e0 00:16:06.419 [2024-06-10 19:00:21.101653] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.419 [2024-06-10 19:00:21.103040] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.419 [2024-06-10 19:00:21.103071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:06.419 BaseBdev2 00:16:06.419 19:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:06.419 19:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:06.679 BaseBdev3_malloc 00:16:06.679 19:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:06.938 true 00:16:06.938 19:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:07.197 [2024-06-10 19:00:21.779721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:07.197 [2024-06-10 19:00:21.779759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.197 [2024-06-10 19:00:21.779776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x197cfd0 00:16:07.197 [2024-06-10 19:00:21.779788] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.197 [2024-06-10 19:00:21.781154] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.197 [2024-06-10 19:00:21.781179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:07.197 BaseBdev3 00:16:07.197 19:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:07.457 [2024-06-10 19:00:22.000332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.457 [2024-06-10 19:00:22.001500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.457 [2024-06-10 19:00:22.001564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.457 [2024-06-10 19:00:22.001762] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x197e3f0 00:16:07.457 [2024-06-10 19:00:22.001773] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 
190464, blocklen 512 00:16:07.457 [2024-06-10 19:00:22.001945] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17d17e0 00:16:07.457 [2024-06-10 19:00:22.002082] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x197e3f0 00:16:07.457 [2024-06-10 19:00:22.002092] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x197e3f0 00:16:07.457 [2024-06-10 19:00:22.002184] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.457 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.716 19:00:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.716 "name": "raid_bdev1", 00:16:07.716 "uuid": "7ca5bd2a-19f2-4fd4-9873-e0a3091808cc", 00:16:07.716 "strip_size_kb": 64, 00:16:07.716 "state": "online", 00:16:07.716 "raid_level": "raid0", 00:16:07.716 "superblock": true, 00:16:07.716 "num_base_bdevs": 3, 00:16:07.716 "num_base_bdevs_discovered": 3, 00:16:07.716 "num_base_bdevs_operational": 3, 00:16:07.716 "base_bdevs_list": [ 00:16:07.716 { 00:16:07.716 "name": "BaseBdev1", 00:16:07.716 "uuid": "b2501cbc-1df7-55d0-bceb-416625890cfb", 00:16:07.716 "is_configured": true, 00:16:07.716 "data_offset": 2048, 00:16:07.716 "data_size": 63488 00:16:07.716 }, 00:16:07.716 { 00:16:07.716 "name": "BaseBdev2", 00:16:07.716 "uuid": "565fa81f-ca10-5b3f-8d6b-ea96a8188697", 00:16:07.716 "is_configured": true, 00:16:07.716 "data_offset": 2048, 00:16:07.716 "data_size": 63488 00:16:07.716 }, 00:16:07.716 { 00:16:07.716 "name": "BaseBdev3", 00:16:07.716 "uuid": "b721a393-1d0b-5429-9821-a1fdb31948df", 00:16:07.716 "is_configured": true, 00:16:07.716 "data_offset": 2048, 00:16:07.716 "data_size": 63488 00:16:07.716 } 00:16:07.716 ] 00:16:07.716 }' 00:16:07.716 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.716 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.282 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:08.283 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:08.283 [2024-06-10 19:00:22.930994] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14ced40 00:16:09.236 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error 
EE_BaseBdev1_malloc write failure 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.494 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.752 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.752 "name": "raid_bdev1", 00:16:09.752 "uuid": 
"7ca5bd2a-19f2-4fd4-9873-e0a3091808cc", 00:16:09.752 "strip_size_kb": 64, 00:16:09.752 "state": "online", 00:16:09.752 "raid_level": "raid0", 00:16:09.752 "superblock": true, 00:16:09.752 "num_base_bdevs": 3, 00:16:09.752 "num_base_bdevs_discovered": 3, 00:16:09.752 "num_base_bdevs_operational": 3, 00:16:09.752 "base_bdevs_list": [ 00:16:09.752 { 00:16:09.752 "name": "BaseBdev1", 00:16:09.752 "uuid": "b2501cbc-1df7-55d0-bceb-416625890cfb", 00:16:09.752 "is_configured": true, 00:16:09.752 "data_offset": 2048, 00:16:09.752 "data_size": 63488 00:16:09.752 }, 00:16:09.752 { 00:16:09.752 "name": "BaseBdev2", 00:16:09.752 "uuid": "565fa81f-ca10-5b3f-8d6b-ea96a8188697", 00:16:09.752 "is_configured": true, 00:16:09.752 "data_offset": 2048, 00:16:09.752 "data_size": 63488 00:16:09.752 }, 00:16:09.752 { 00:16:09.752 "name": "BaseBdev3", 00:16:09.752 "uuid": "b721a393-1d0b-5429-9821-a1fdb31948df", 00:16:09.752 "is_configured": true, 00:16:09.752 "data_offset": 2048, 00:16:09.752 "data_size": 63488 00:16:09.752 } 00:16:09.752 ] 00:16:09.752 }' 00:16:09.752 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.752 19:00:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.327 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:10.587 [2024-06-10 19:00:25.089543] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.587 [2024-06-10 19:00:25.089574] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.587 [2024-06-10 19:00:25.092544] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.587 [2024-06-10 19:00:25.092590] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.587 [2024-06-10 19:00:25.092623] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.587 [2024-06-10 19:00:25.092633] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x197e3f0 name raid_bdev1, state offline 00:16:10.587 0 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1657524 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1657524 ']' 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1657524 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1657524 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1657524' 00:16:10.587 killing process with pid 1657524 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1657524 00:16:10.587 [2024-06-10 19:00:25.164795] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.587 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1657524 00:16:10.587 [2024-06-10 19:00:25.183007] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0ehp35YiLh 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:10.847 19:00:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:16:10.847 00:16:10.847 real 0m6.582s 00:16:10.847 user 0m10.329s 00:16:10.847 sys 0m1.188s 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:10.847 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.847 ************************************ 00:16:10.847 END TEST raid_write_error_test 00:16:10.847 ************************************ 00:16:10.847 19:00:25 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:10.847 19:00:25 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:10.847 19:00:25 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:10.847 19:00:25 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:10.847 19:00:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.847 ************************************ 00:16:10.847 START TEST raid_state_function_test 00:16:10.847 ************************************ 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 3 false 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local 
num_base_bdevs=3 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:10.847 19:00:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1658692 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1658692' 00:16:10.847 Process raid pid: 1658692 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1658692 /var/tmp/spdk-raid.sock 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1658692 ']' 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:10.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:10.847 19:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.847 [2024-06-10 19:00:25.547325] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:16:10.847 [2024-06-10 19:00:25.547380] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.107 EAL: Requested device 0000:b6:01.0 cannot be used 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.107 EAL: Requested device 0000:b6:01.1 cannot be used 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.107 EAL: Requested device 0000:b6:01.2 cannot be used 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.107 EAL: Requested device 0000:b6:01.3 cannot be used 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.107 EAL: Requested device 0000:b6:01.4 cannot be used 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.107 EAL: Requested device 0000:b6:01.5 cannot be used 00:16:11.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:01.6 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:01.7 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.0 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.1 cannot be used 
00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.2 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.3 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.4 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.5 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.6 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b6:02.7 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.0 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.1 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.2 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.3 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.4 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.5 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.6 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:01.7 cannot be used 00:16:11.108 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.0 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.1 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.2 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.3 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.4 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.5 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.6 cannot be used 00:16:11.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:11.108 EAL: Requested device 0000:b8:02.7 cannot be used 00:16:11.108 [2024-06-10 19:00:25.681562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.108 [2024-06-10 19:00:25.767591] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.108 [2024-06-10 19:00:25.825549] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.108 [2024-06-10 19:00:25.825571] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:12.045 [2024-06-10 19:00:26.651397] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.045 [2024-06-10 19:00:26.651436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.045 [2024-06-10 19:00:26.651446] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.045 [2024-06-10 19:00:26.651457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.045 [2024-06-10 19:00:26.651466] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.045 [2024-06-10 19:00:26.651476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.045 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.046 
19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.046 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.304 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.304 "name": "Existed_Raid", 00:16:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.304 "strip_size_kb": 64, 00:16:12.304 "state": "configuring", 00:16:12.304 "raid_level": "concat", 00:16:12.304 "superblock": false, 00:16:12.304 "num_base_bdevs": 3, 00:16:12.304 "num_base_bdevs_discovered": 0, 00:16:12.304 "num_base_bdevs_operational": 3, 00:16:12.304 "base_bdevs_list": [ 00:16:12.304 { 00:16:12.304 "name": "BaseBdev1", 00:16:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.304 "is_configured": false, 00:16:12.304 "data_offset": 0, 00:16:12.304 "data_size": 0 00:16:12.304 }, 00:16:12.304 { 00:16:12.304 "name": "BaseBdev2", 00:16:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.304 "is_configured": false, 00:16:12.304 "data_offset": 0, 00:16:12.304 "data_size": 0 00:16:12.304 }, 00:16:12.304 { 00:16:12.304 "name": "BaseBdev3", 00:16:12.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.304 "is_configured": false, 00:16:12.304 "data_offset": 0, 00:16:12.304 "data_size": 0 00:16:12.304 } 00:16:12.304 ] 00:16:12.304 }' 00:16:12.304 19:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.304 19:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.873 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:13.132 [2024-06-10 19:00:27.637867] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.132 [2024-06-10 19:00:27.637896] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1397f30 name Existed_Raid, state configuring 00:16:13.132 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:13.132 [2024-06-10 19:00:27.862467] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.132 [2024-06-10 19:00:27.862496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.132 [2024-06-10 19:00:27.862505] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.132 [2024-06-10 19:00:27.862516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.132 [2024-06-10 19:00:27.862524] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.132 [2024-06-10 19:00:27.862534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.132 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:13.389 [2024-06-10 19:00:28.084414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.389 BaseBdev1 00:16:13.389 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:13.389 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:13.389 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:13.389 
19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:13.389 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:13.389 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:13.389 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.647 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:13.906 [ 00:16:13.906 { 00:16:13.906 "name": "BaseBdev1", 00:16:13.906 "aliases": [ 00:16:13.906 "76875cfc-0674-42a4-949f-b36b8fdf08cb" 00:16:13.906 ], 00:16:13.906 "product_name": "Malloc disk", 00:16:13.906 "block_size": 512, 00:16:13.906 "num_blocks": 65536, 00:16:13.906 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:13.906 "assigned_rate_limits": { 00:16:13.906 "rw_ios_per_sec": 0, 00:16:13.906 "rw_mbytes_per_sec": 0, 00:16:13.906 "r_mbytes_per_sec": 0, 00:16:13.906 "w_mbytes_per_sec": 0 00:16:13.906 }, 00:16:13.906 "claimed": true, 00:16:13.906 "claim_type": "exclusive_write", 00:16:13.906 "zoned": false, 00:16:13.906 "supported_io_types": { 00:16:13.906 "read": true, 00:16:13.906 "write": true, 00:16:13.906 "unmap": true, 00:16:13.906 "write_zeroes": true, 00:16:13.906 "flush": true, 00:16:13.906 "reset": true, 00:16:13.906 "compare": false, 00:16:13.906 "compare_and_write": false, 00:16:13.906 "abort": true, 00:16:13.906 "nvme_admin": false, 00:16:13.906 "nvme_io": false 00:16:13.906 }, 00:16:13.906 "memory_domains": [ 00:16:13.906 { 00:16:13.906 "dma_device_id": "system", 00:16:13.906 "dma_device_type": 1 00:16:13.906 }, 00:16:13.906 { 00:16:13.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.906 "dma_device_type": 
2 00:16:13.906 } 00:16:13.906 ], 00:16:13.906 "driver_specific": {} 00:16:13.906 } 00:16:13.906 ] 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.906 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.164 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.164 "name": "Existed_Raid", 00:16:14.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.164 "strip_size_kb": 64, 00:16:14.164 "state": "configuring", 00:16:14.164 "raid_level": 
"concat", 00:16:14.164 "superblock": false, 00:16:14.164 "num_base_bdevs": 3, 00:16:14.164 "num_base_bdevs_discovered": 1, 00:16:14.164 "num_base_bdevs_operational": 3, 00:16:14.164 "base_bdevs_list": [ 00:16:14.164 { 00:16:14.164 "name": "BaseBdev1", 00:16:14.164 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:14.164 "is_configured": true, 00:16:14.164 "data_offset": 0, 00:16:14.164 "data_size": 65536 00:16:14.164 }, 00:16:14.164 { 00:16:14.164 "name": "BaseBdev2", 00:16:14.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.164 "is_configured": false, 00:16:14.164 "data_offset": 0, 00:16:14.164 "data_size": 0 00:16:14.164 }, 00:16:14.164 { 00:16:14.164 "name": "BaseBdev3", 00:16:14.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.164 "is_configured": false, 00:16:14.164 "data_offset": 0, 00:16:14.164 "data_size": 0 00:16:14.164 } 00:16:14.164 ] 00:16:14.164 }' 00:16:14.164 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.164 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.737 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:14.995 [2024-06-10 19:00:29.516163] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.995 [2024-06-10 19:00:29.516196] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1397800 name Existed_Raid, state configuring 00:16:14.995 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:14.995 [2024-06-10 19:00:29.744793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.995 [2024-06-10 
19:00:29.746170] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.995 [2024-06-10 19:00:29.746200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.995 [2024-06-10 19:00:29.746210] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.995 [2024-06-10 19:00:29.746221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:15.254 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.255 "name": "Existed_Raid", 00:16:15.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.255 "strip_size_kb": 64, 00:16:15.255 "state": "configuring", 00:16:15.255 "raid_level": "concat", 00:16:15.255 "superblock": false, 00:16:15.255 "num_base_bdevs": 3, 00:16:15.255 "num_base_bdevs_discovered": 1, 00:16:15.255 "num_base_bdevs_operational": 3, 00:16:15.255 "base_bdevs_list": [ 00:16:15.255 { 00:16:15.255 "name": "BaseBdev1", 00:16:15.255 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:15.255 "is_configured": true, 00:16:15.255 "data_offset": 0, 00:16:15.255 "data_size": 65536 00:16:15.255 }, 00:16:15.255 { 00:16:15.255 "name": "BaseBdev2", 00:16:15.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.255 "is_configured": false, 00:16:15.255 "data_offset": 0, 00:16:15.255 "data_size": 0 00:16:15.255 }, 00:16:15.255 { 00:16:15.255 "name": "BaseBdev3", 00:16:15.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.255 "is_configured": false, 00:16:15.255 "data_offset": 0, 00:16:15.255 "data_size": 0 00:16:15.255 } 00:16:15.255 ] 00:16:15.255 }' 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.255 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.821 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:16.079 [2024-06-10 19:00:30.778673] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:16:16.079 BaseBdev2 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:16.079 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.338 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:16.597 [ 00:16:16.597 { 00:16:16.597 "name": "BaseBdev2", 00:16:16.597 "aliases": [ 00:16:16.597 "9aefbf45-543f-43f0-a71c-430871aa554b" 00:16:16.597 ], 00:16:16.597 "product_name": "Malloc disk", 00:16:16.597 "block_size": 512, 00:16:16.597 "num_blocks": 65536, 00:16:16.597 "uuid": "9aefbf45-543f-43f0-a71c-430871aa554b", 00:16:16.597 "assigned_rate_limits": { 00:16:16.597 "rw_ios_per_sec": 0, 00:16:16.597 "rw_mbytes_per_sec": 0, 00:16:16.597 "r_mbytes_per_sec": 0, 00:16:16.597 "w_mbytes_per_sec": 0 00:16:16.597 }, 00:16:16.597 "claimed": true, 00:16:16.597 "claim_type": "exclusive_write", 00:16:16.597 "zoned": false, 00:16:16.597 "supported_io_types": { 00:16:16.597 "read": true, 00:16:16.597 "write": true, 00:16:16.597 "unmap": true, 00:16:16.597 "write_zeroes": true, 00:16:16.597 "flush": true, 00:16:16.597 "reset": true, 00:16:16.597 "compare": false, 
00:16:16.597 "compare_and_write": false, 00:16:16.597 "abort": true, 00:16:16.597 "nvme_admin": false, 00:16:16.597 "nvme_io": false 00:16:16.597 }, 00:16:16.597 "memory_domains": [ 00:16:16.597 { 00:16:16.597 "dma_device_id": "system", 00:16:16.597 "dma_device_type": 1 00:16:16.597 }, 00:16:16.597 { 00:16:16.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.597 "dma_device_type": 2 00:16:16.597 } 00:16:16.597 ], 00:16:16.597 "driver_specific": {} 00:16:16.597 } 00:16:16.597 ] 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.597 19:00:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.597 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.856 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.856 "name": "Existed_Raid", 00:16:16.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.856 "strip_size_kb": 64, 00:16:16.856 "state": "configuring", 00:16:16.856 "raid_level": "concat", 00:16:16.856 "superblock": false, 00:16:16.856 "num_base_bdevs": 3, 00:16:16.856 "num_base_bdevs_discovered": 2, 00:16:16.856 "num_base_bdevs_operational": 3, 00:16:16.856 "base_bdevs_list": [ 00:16:16.856 { 00:16:16.856 "name": "BaseBdev1", 00:16:16.856 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:16.856 "is_configured": true, 00:16:16.856 "data_offset": 0, 00:16:16.856 "data_size": 65536 00:16:16.856 }, 00:16:16.856 { 00:16:16.856 "name": "BaseBdev2", 00:16:16.856 "uuid": "9aefbf45-543f-43f0-a71c-430871aa554b", 00:16:16.856 "is_configured": true, 00:16:16.856 "data_offset": 0, 00:16:16.856 "data_size": 65536 00:16:16.856 }, 00:16:16.856 { 00:16:16.856 "name": "BaseBdev3", 00:16:16.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.856 "is_configured": false, 00:16:16.856 "data_offset": 0, 00:16:16.856 "data_size": 0 00:16:16.856 } 00:16:16.856 ] 00:16:16.856 }' 00:16:16.856 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.856 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.424 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:17.683 [2024-06-10 19:00:32.285830] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.683 [2024-06-10 19:00:32.285861] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13986f0 00:16:17.683 [2024-06-10 19:00:32.285869] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:17.683 [2024-06-10 19:00:32.286043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13983c0 00:16:17.683 [2024-06-10 19:00:32.286152] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13986f0 00:16:17.683 [2024-06-10 19:00:32.286161] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x13986f0 00:16:17.683 [2024-06-10 19:00:32.286303] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.683 BaseBdev3 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:17.683 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.946 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:18.204 [ 00:16:18.204 { 
00:16:18.204 "name": "BaseBdev3", 00:16:18.204 "aliases": [ 00:16:18.204 "b9923d88-019e-4688-88d0-9b9b3ebbc5de" 00:16:18.204 ], 00:16:18.204 "product_name": "Malloc disk", 00:16:18.204 "block_size": 512, 00:16:18.204 "num_blocks": 65536, 00:16:18.204 "uuid": "b9923d88-019e-4688-88d0-9b9b3ebbc5de", 00:16:18.204 "assigned_rate_limits": { 00:16:18.204 "rw_ios_per_sec": 0, 00:16:18.204 "rw_mbytes_per_sec": 0, 00:16:18.204 "r_mbytes_per_sec": 0, 00:16:18.204 "w_mbytes_per_sec": 0 00:16:18.204 }, 00:16:18.204 "claimed": true, 00:16:18.204 "claim_type": "exclusive_write", 00:16:18.204 "zoned": false, 00:16:18.204 "supported_io_types": { 00:16:18.204 "read": true, 00:16:18.204 "write": true, 00:16:18.204 "unmap": true, 00:16:18.204 "write_zeroes": true, 00:16:18.204 "flush": true, 00:16:18.204 "reset": true, 00:16:18.204 "compare": false, 00:16:18.204 "compare_and_write": false, 00:16:18.204 "abort": true, 00:16:18.204 "nvme_admin": false, 00:16:18.204 "nvme_io": false 00:16:18.205 }, 00:16:18.205 "memory_domains": [ 00:16:18.205 { 00:16:18.205 "dma_device_id": "system", 00:16:18.205 "dma_device_type": 1 00:16:18.205 }, 00:16:18.205 { 00:16:18.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.205 "dma_device_type": 2 00:16:18.205 } 00:16:18.205 ], 00:16:18.205 "driver_specific": {} 00:16:18.205 } 00:16:18.205 ] 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.205 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.463 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.463 "name": "Existed_Raid", 00:16:18.463 "uuid": "314adff2-ad8f-470c-a0f6-9b8ab761b8a4", 00:16:18.463 "strip_size_kb": 64, 00:16:18.463 "state": "online", 00:16:18.464 "raid_level": "concat", 00:16:18.464 "superblock": false, 00:16:18.464 "num_base_bdevs": 3, 00:16:18.464 "num_base_bdevs_discovered": 3, 00:16:18.464 "num_base_bdevs_operational": 3, 00:16:18.464 "base_bdevs_list": [ 00:16:18.464 { 00:16:18.464 "name": "BaseBdev1", 00:16:18.464 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:18.464 "is_configured": true, 00:16:18.464 "data_offset": 0, 00:16:18.464 "data_size": 65536 00:16:18.464 }, 00:16:18.464 { 00:16:18.464 "name": "BaseBdev2", 00:16:18.464 "uuid": "9aefbf45-543f-43f0-a71c-430871aa554b", 00:16:18.464 "is_configured": true, 
00:16:18.464 "data_offset": 0, 00:16:18.464 "data_size": 65536 00:16:18.464 }, 00:16:18.464 { 00:16:18.464 "name": "BaseBdev3", 00:16:18.464 "uuid": "b9923d88-019e-4688-88d0-9b9b3ebbc5de", 00:16:18.464 "is_configured": true, 00:16:18.464 "data_offset": 0, 00:16:18.464 "data_size": 65536 00:16:18.464 } 00:16:18.464 ] 00:16:18.464 }' 00:16:18.464 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.464 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:19.032 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:19.032 [2024-06-10 19:00:33.782032] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.292 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:19.292 "name": "Existed_Raid", 00:16:19.292 "aliases": [ 00:16:19.292 "314adff2-ad8f-470c-a0f6-9b8ab761b8a4" 00:16:19.292 ], 00:16:19.292 "product_name": "Raid Volume", 00:16:19.292 "block_size": 512, 00:16:19.292 "num_blocks": 196608, 00:16:19.292 
"uuid": "314adff2-ad8f-470c-a0f6-9b8ab761b8a4", 00:16:19.292 "assigned_rate_limits": { 00:16:19.292 "rw_ios_per_sec": 0, 00:16:19.292 "rw_mbytes_per_sec": 0, 00:16:19.292 "r_mbytes_per_sec": 0, 00:16:19.292 "w_mbytes_per_sec": 0 00:16:19.292 }, 00:16:19.292 "claimed": false, 00:16:19.292 "zoned": false, 00:16:19.292 "supported_io_types": { 00:16:19.292 "read": true, 00:16:19.292 "write": true, 00:16:19.292 "unmap": true, 00:16:19.292 "write_zeroes": true, 00:16:19.292 "flush": true, 00:16:19.292 "reset": true, 00:16:19.292 "compare": false, 00:16:19.292 "compare_and_write": false, 00:16:19.292 "abort": false, 00:16:19.292 "nvme_admin": false, 00:16:19.292 "nvme_io": false 00:16:19.292 }, 00:16:19.292 "memory_domains": [ 00:16:19.292 { 00:16:19.292 "dma_device_id": "system", 00:16:19.292 "dma_device_type": 1 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.292 "dma_device_type": 2 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "dma_device_id": "system", 00:16:19.292 "dma_device_type": 1 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.292 "dma_device_type": 2 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "dma_device_id": "system", 00:16:19.292 "dma_device_type": 1 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.292 "dma_device_type": 2 00:16:19.292 } 00:16:19.292 ], 00:16:19.292 "driver_specific": { 00:16:19.292 "raid": { 00:16:19.292 "uuid": "314adff2-ad8f-470c-a0f6-9b8ab761b8a4", 00:16:19.292 "strip_size_kb": 64, 00:16:19.292 "state": "online", 00:16:19.292 "raid_level": "concat", 00:16:19.292 "superblock": false, 00:16:19.292 "num_base_bdevs": 3, 00:16:19.292 "num_base_bdevs_discovered": 3, 00:16:19.292 "num_base_bdevs_operational": 3, 00:16:19.292 "base_bdevs_list": [ 00:16:19.292 { 00:16:19.292 "name": "BaseBdev1", 00:16:19.292 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:19.292 "is_configured": true, 00:16:19.292 
"data_offset": 0, 00:16:19.292 "data_size": 65536 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "name": "BaseBdev2", 00:16:19.292 "uuid": "9aefbf45-543f-43f0-a71c-430871aa554b", 00:16:19.292 "is_configured": true, 00:16:19.292 "data_offset": 0, 00:16:19.292 "data_size": 65536 00:16:19.292 }, 00:16:19.292 { 00:16:19.292 "name": "BaseBdev3", 00:16:19.292 "uuid": "b9923d88-019e-4688-88d0-9b9b3ebbc5de", 00:16:19.292 "is_configured": true, 00:16:19.292 "data_offset": 0, 00:16:19.292 "data_size": 65536 00:16:19.292 } 00:16:19.292 ] 00:16:19.292 } 00:16:19.292 } 00:16:19.292 }' 00:16:19.292 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.292 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:19.292 BaseBdev2 00:16:19.292 BaseBdev3' 00:16:19.292 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:19.292 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:19.292 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:19.557 "name": "BaseBdev1", 00:16:19.557 "aliases": [ 00:16:19.557 "76875cfc-0674-42a4-949f-b36b8fdf08cb" 00:16:19.557 ], 00:16:19.557 "product_name": "Malloc disk", 00:16:19.557 "block_size": 512, 00:16:19.557 "num_blocks": 65536, 00:16:19.557 "uuid": "76875cfc-0674-42a4-949f-b36b8fdf08cb", 00:16:19.557 "assigned_rate_limits": { 00:16:19.557 "rw_ios_per_sec": 0, 00:16:19.557 "rw_mbytes_per_sec": 0, 00:16:19.557 "r_mbytes_per_sec": 0, 00:16:19.557 "w_mbytes_per_sec": 0 00:16:19.557 }, 00:16:19.557 "claimed": true, 00:16:19.557 "claim_type": 
"exclusive_write", 00:16:19.557 "zoned": false, 00:16:19.557 "supported_io_types": { 00:16:19.557 "read": true, 00:16:19.557 "write": true, 00:16:19.557 "unmap": true, 00:16:19.557 "write_zeroes": true, 00:16:19.557 "flush": true, 00:16:19.557 "reset": true, 00:16:19.557 "compare": false, 00:16:19.557 "compare_and_write": false, 00:16:19.557 "abort": true, 00:16:19.557 "nvme_admin": false, 00:16:19.557 "nvme_io": false 00:16:19.557 }, 00:16:19.557 "memory_domains": [ 00:16:19.557 { 00:16:19.557 "dma_device_id": "system", 00:16:19.557 "dma_device_type": 1 00:16:19.557 }, 00:16:19.557 { 00:16:19.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.557 "dma_device_type": 2 00:16:19.557 } 00:16:19.557 ], 00:16:19.557 "driver_specific": {} 00:16:19.557 }' 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.557 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:19.869 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:20.139 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:20.139 "name": "BaseBdev2", 00:16:20.139 "aliases": [ 00:16:20.139 "9aefbf45-543f-43f0-a71c-430871aa554b" 00:16:20.139 ], 00:16:20.139 "product_name": "Malloc disk", 00:16:20.139 "block_size": 512, 00:16:20.139 "num_blocks": 65536, 00:16:20.139 "uuid": "9aefbf45-543f-43f0-a71c-430871aa554b", 00:16:20.140 "assigned_rate_limits": { 00:16:20.140 "rw_ios_per_sec": 0, 00:16:20.140 "rw_mbytes_per_sec": 0, 00:16:20.140 "r_mbytes_per_sec": 0, 00:16:20.140 "w_mbytes_per_sec": 0 00:16:20.140 }, 00:16:20.140 "claimed": true, 00:16:20.140 "claim_type": "exclusive_write", 00:16:20.140 "zoned": false, 00:16:20.140 "supported_io_types": { 00:16:20.140 "read": true, 00:16:20.140 "write": true, 00:16:20.140 "unmap": true, 00:16:20.140 "write_zeroes": true, 00:16:20.140 "flush": true, 00:16:20.140 "reset": true, 00:16:20.140 "compare": false, 00:16:20.140 "compare_and_write": false, 00:16:20.140 "abort": true, 00:16:20.140 "nvme_admin": false, 00:16:20.140 "nvme_io": false 00:16:20.140 }, 00:16:20.140 "memory_domains": [ 00:16:20.140 { 00:16:20.140 "dma_device_id": "system", 00:16:20.140 "dma_device_type": 1 00:16:20.140 }, 00:16:20.140 { 00:16:20.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.140 "dma_device_type": 2 00:16:20.140 } 00:16:20.140 ], 00:16:20.140 "driver_specific": {} 00:16:20.140 }' 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:20.140 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.398 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.398 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:20.398 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:20.398 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:20.398 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:20.657 "name": "BaseBdev3", 00:16:20.657 "aliases": [ 00:16:20.657 "b9923d88-019e-4688-88d0-9b9b3ebbc5de" 00:16:20.657 ], 00:16:20.657 "product_name": "Malloc disk", 00:16:20.657 "block_size": 512, 00:16:20.657 "num_blocks": 65536, 00:16:20.657 "uuid": "b9923d88-019e-4688-88d0-9b9b3ebbc5de", 00:16:20.657 "assigned_rate_limits": { 00:16:20.657 "rw_ios_per_sec": 0, 00:16:20.657 "rw_mbytes_per_sec": 
0, 00:16:20.657 "r_mbytes_per_sec": 0, 00:16:20.657 "w_mbytes_per_sec": 0 00:16:20.657 }, 00:16:20.657 "claimed": true, 00:16:20.657 "claim_type": "exclusive_write", 00:16:20.657 "zoned": false, 00:16:20.657 "supported_io_types": { 00:16:20.657 "read": true, 00:16:20.657 "write": true, 00:16:20.657 "unmap": true, 00:16:20.657 "write_zeroes": true, 00:16:20.657 "flush": true, 00:16:20.657 "reset": true, 00:16:20.657 "compare": false, 00:16:20.657 "compare_and_write": false, 00:16:20.657 "abort": true, 00:16:20.657 "nvme_admin": false, 00:16:20.657 "nvme_io": false 00:16:20.657 }, 00:16:20.657 "memory_domains": [ 00:16:20.657 { 00:16:20.657 "dma_device_id": "system", 00:16:20.657 "dma_device_type": 1 00:16:20.657 }, 00:16:20.657 { 00:16:20.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.657 "dma_device_type": 2 00:16:20.657 } 00:16:20.657 ], 00:16:20.657 "driver_specific": {} 00:16:20.657 }' 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:20.657 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:20.916 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:20.916 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.916 19:00:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.916 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:20.916 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:21.176 [2024-06-10 19:00:35.722951] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.176 [2024-06-10 19:00:35.722973] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.176 [2024-06-10 19:00:35.723008] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.176 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:21.177 19:00:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.177 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.177 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.177 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.177 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.177 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.436 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.436 "name": "Existed_Raid", 00:16:21.436 "uuid": "314adff2-ad8f-470c-a0f6-9b8ab761b8a4", 00:16:21.436 "strip_size_kb": 64, 00:16:21.436 "state": "offline", 00:16:21.436 "raid_level": "concat", 00:16:21.436 "superblock": false, 00:16:21.436 "num_base_bdevs": 3, 00:16:21.436 "num_base_bdevs_discovered": 2, 00:16:21.436 "num_base_bdevs_operational": 2, 00:16:21.436 "base_bdevs_list": [ 00:16:21.436 { 00:16:21.436 "name": null, 00:16:21.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.436 "is_configured": false, 00:16:21.436 "data_offset": 0, 00:16:21.436 "data_size": 65536 00:16:21.436 }, 00:16:21.436 { 00:16:21.436 "name": "BaseBdev2", 00:16:21.436 "uuid": "9aefbf45-543f-43f0-a71c-430871aa554b", 00:16:21.436 "is_configured": true, 00:16:21.436 "data_offset": 0, 00:16:21.436 "data_size": 65536 00:16:21.436 }, 00:16:21.436 { 00:16:21.436 "name": "BaseBdev3", 00:16:21.436 "uuid": "b9923d88-019e-4688-88d0-9b9b3ebbc5de", 00:16:21.436 "is_configured": true, 00:16:21.436 "data_offset": 0, 00:16:21.436 "data_size": 65536 00:16:21.436 } 00:16:21.436 ] 00:16:21.436 }' 00:16:21.436 19:00:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.436 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.005 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:22.264 [2024-06-10 19:00:36.923185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.264 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:22.264 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:22.264 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.264 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:22.524 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:22.524 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.524 19:00:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:22.783 [2024-06-10 19:00:37.398229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.783 [2024-06-10 19:00:37.398266] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13986f0 name Existed_Raid, state offline 00:16:22.783 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:22.783 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:22.783 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.783 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:23.043 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:23.043 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:23.043 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:23.043 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:23.043 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:23.043 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.303 BaseBdev2 00:16:23.303 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:23.303 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:23.303 19:00:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:23.303 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:23.303 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:23.303 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:23.303 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.563 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.563 [ 00:16:23.563 { 00:16:23.563 "name": "BaseBdev2", 00:16:23.563 "aliases": [ 00:16:23.563 "6e81008a-f200-46dd-94b6-ca3be6f24a9b" 00:16:23.563 ], 00:16:23.563 "product_name": "Malloc disk", 00:16:23.563 "block_size": 512, 00:16:23.563 "num_blocks": 65536, 00:16:23.563 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:23.563 "assigned_rate_limits": { 00:16:23.563 "rw_ios_per_sec": 0, 00:16:23.563 "rw_mbytes_per_sec": 0, 00:16:23.563 "r_mbytes_per_sec": 0, 00:16:23.563 "w_mbytes_per_sec": 0 00:16:23.563 }, 00:16:23.563 "claimed": false, 00:16:23.563 "zoned": false, 00:16:23.563 "supported_io_types": { 00:16:23.563 "read": true, 00:16:23.563 "write": true, 00:16:23.563 "unmap": true, 00:16:23.563 "write_zeroes": true, 00:16:23.563 "flush": true, 00:16:23.563 "reset": true, 00:16:23.563 "compare": false, 00:16:23.563 "compare_and_write": false, 00:16:23.563 "abort": true, 00:16:23.563 "nvme_admin": false, 00:16:23.563 "nvme_io": false 00:16:23.563 }, 00:16:23.563 "memory_domains": [ 00:16:23.563 { 00:16:23.563 "dma_device_id": "system", 00:16:23.563 "dma_device_type": 1 00:16:23.563 }, 00:16:23.563 { 00:16:23.563 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.563 "dma_device_type": 2 00:16:23.563 } 00:16:23.563 ], 00:16:23.563 "driver_specific": {} 00:16:23.563 } 00:16:23.563 ] 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:23.823 BaseBdev3 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:23.823 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.083 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:24.342 [ 00:16:24.342 { 00:16:24.342 "name": "BaseBdev3", 00:16:24.342 "aliases": [ 00:16:24.342 "d55b811a-59b4-47d5-be7a-6148a9b1514d" 00:16:24.342 ], 00:16:24.342 
"product_name": "Malloc disk", 00:16:24.342 "block_size": 512, 00:16:24.342 "num_blocks": 65536, 00:16:24.342 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:24.342 "assigned_rate_limits": { 00:16:24.342 "rw_ios_per_sec": 0, 00:16:24.342 "rw_mbytes_per_sec": 0, 00:16:24.342 "r_mbytes_per_sec": 0, 00:16:24.342 "w_mbytes_per_sec": 0 00:16:24.342 }, 00:16:24.342 "claimed": false, 00:16:24.342 "zoned": false, 00:16:24.342 "supported_io_types": { 00:16:24.342 "read": true, 00:16:24.342 "write": true, 00:16:24.342 "unmap": true, 00:16:24.342 "write_zeroes": true, 00:16:24.342 "flush": true, 00:16:24.342 "reset": true, 00:16:24.342 "compare": false, 00:16:24.342 "compare_and_write": false, 00:16:24.342 "abort": true, 00:16:24.342 "nvme_admin": false, 00:16:24.342 "nvme_io": false 00:16:24.342 }, 00:16:24.342 "memory_domains": [ 00:16:24.342 { 00:16:24.342 "dma_device_id": "system", 00:16:24.342 "dma_device_type": 1 00:16:24.342 }, 00:16:24.342 { 00:16:24.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.342 "dma_device_type": 2 00:16:24.342 } 00:16:24.342 ], 00:16:24.342 "driver_specific": {} 00:16:24.342 } 00:16:24.342 ] 00:16:24.342 19:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:24.342 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:24.342 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:24.342 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.601 [2024-06-10 19:00:39.217469] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.601 [2024-06-10 19:00:39.217506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.601 
[2024-06-10 19:00:39.217524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.601 [2024-06-10 19:00:39.218758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.601 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.859 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.859 "name": "Existed_Raid", 00:16:24.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.859 "strip_size_kb": 64, 00:16:24.859 "state": 
"configuring", 00:16:24.859 "raid_level": "concat", 00:16:24.859 "superblock": false, 00:16:24.859 "num_base_bdevs": 3, 00:16:24.859 "num_base_bdevs_discovered": 2, 00:16:24.859 "num_base_bdevs_operational": 3, 00:16:24.859 "base_bdevs_list": [ 00:16:24.859 { 00:16:24.859 "name": "BaseBdev1", 00:16:24.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.859 "is_configured": false, 00:16:24.859 "data_offset": 0, 00:16:24.859 "data_size": 0 00:16:24.859 }, 00:16:24.859 { 00:16:24.859 "name": "BaseBdev2", 00:16:24.859 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:24.859 "is_configured": true, 00:16:24.859 "data_offset": 0, 00:16:24.859 "data_size": 65536 00:16:24.859 }, 00:16:24.859 { 00:16:24.859 "name": "BaseBdev3", 00:16:24.859 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:24.859 "is_configured": true, 00:16:24.859 "data_offset": 0, 00:16:24.859 "data_size": 65536 00:16:24.859 } 00:16:24.859 ] 00:16:24.859 }' 00:16:24.859 19:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.859 19:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.434 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:25.692 [2024-06-10 19:00:40.228113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:25.692 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:25.692 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.692 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.692 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:25.692 19:00:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.693 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.962 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.963 "name": "Existed_Raid", 00:16:25.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.963 "strip_size_kb": 64, 00:16:25.963 "state": "configuring", 00:16:25.963 "raid_level": "concat", 00:16:25.963 "superblock": false, 00:16:25.963 "num_base_bdevs": 3, 00:16:25.963 "num_base_bdevs_discovered": 1, 00:16:25.963 "num_base_bdevs_operational": 3, 00:16:25.963 "base_bdevs_list": [ 00:16:25.963 { 00:16:25.963 "name": "BaseBdev1", 00:16:25.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.963 "is_configured": false, 00:16:25.963 "data_offset": 0, 00:16:25.963 "data_size": 0 00:16:25.963 }, 00:16:25.963 { 00:16:25.963 "name": null, 00:16:25.963 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:25.963 "is_configured": false, 00:16:25.963 "data_offset": 0, 00:16:25.963 "data_size": 65536 00:16:25.963 }, 00:16:25.963 { 00:16:25.963 "name": "BaseBdev3", 00:16:25.963 "uuid": 
"d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:25.963 "is_configured": true, 00:16:25.963 "data_offset": 0, 00:16:25.963 "data_size": 65536 00:16:25.963 } 00:16:25.963 ] 00:16:25.963 }' 00:16:25.963 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.963 19:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.535 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.535 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.535 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:26.535 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.794 [2024-06-10 19:00:41.486717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.794 BaseBdev1 00:16:26.794 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:26.794 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:26.794 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:26.794 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:26.794 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:26.794 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:26.795 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.053 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.318 [ 00:16:27.318 { 00:16:27.319 "name": "BaseBdev1", 00:16:27.319 "aliases": [ 00:16:27.319 "ca154bb0-ec7d-473e-a802-3e7f0ad58c92" 00:16:27.319 ], 00:16:27.319 "product_name": "Malloc disk", 00:16:27.319 "block_size": 512, 00:16:27.319 "num_blocks": 65536, 00:16:27.319 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:27.319 "assigned_rate_limits": { 00:16:27.319 "rw_ios_per_sec": 0, 00:16:27.319 "rw_mbytes_per_sec": 0, 00:16:27.319 "r_mbytes_per_sec": 0, 00:16:27.319 "w_mbytes_per_sec": 0 00:16:27.319 }, 00:16:27.319 "claimed": true, 00:16:27.319 "claim_type": "exclusive_write", 00:16:27.319 "zoned": false, 00:16:27.319 "supported_io_types": { 00:16:27.319 "read": true, 00:16:27.319 "write": true, 00:16:27.319 "unmap": true, 00:16:27.319 "write_zeroes": true, 00:16:27.319 "flush": true, 00:16:27.319 "reset": true, 00:16:27.319 "compare": false, 00:16:27.319 "compare_and_write": false, 00:16:27.319 "abort": true, 00:16:27.319 "nvme_admin": false, 00:16:27.319 "nvme_io": false 00:16:27.319 }, 00:16:27.319 "memory_domains": [ 00:16:27.319 { 00:16:27.319 "dma_device_id": "system", 00:16:27.319 "dma_device_type": 1 00:16:27.319 }, 00:16:27.319 { 00:16:27.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.319 "dma_device_type": 2 00:16:27.319 } 00:16:27.319 ], 00:16:27.319 "driver_specific": {} 00:16:27.319 } 00:16:27.319 ] 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:27.320 19:00:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.320 19:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.583 19:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.583 "name": "Existed_Raid", 00:16:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.583 "strip_size_kb": 64, 00:16:27.583 "state": "configuring", 00:16:27.583 "raid_level": "concat", 00:16:27.583 "superblock": false, 00:16:27.583 "num_base_bdevs": 3, 00:16:27.583 "num_base_bdevs_discovered": 2, 00:16:27.583 "num_base_bdevs_operational": 3, 00:16:27.583 "base_bdevs_list": [ 00:16:27.583 { 00:16:27.583 "name": "BaseBdev1", 00:16:27.583 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:27.583 "is_configured": true, 00:16:27.583 
"data_offset": 0, 00:16:27.583 "data_size": 65536 00:16:27.583 }, 00:16:27.583 { 00:16:27.583 "name": null, 00:16:27.583 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:27.583 "is_configured": false, 00:16:27.583 "data_offset": 0, 00:16:27.583 "data_size": 65536 00:16:27.583 }, 00:16:27.583 { 00:16:27.583 "name": "BaseBdev3", 00:16:27.583 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:27.583 "is_configured": true, 00:16:27.583 "data_offset": 0, 00:16:27.583 "data_size": 65536 00:16:27.583 } 00:16:27.583 ] 00:16:27.583 }' 00:16:27.583 19:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.583 19:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.151 19:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.151 19:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:28.410 19:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:28.410 19:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:28.410 [2024-06-10 19:00:43.143123] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.410 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.669 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.669 "name": "Existed_Raid", 00:16:28.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.669 "strip_size_kb": 64, 00:16:28.669 "state": "configuring", 00:16:28.669 "raid_level": "concat", 00:16:28.669 "superblock": false, 00:16:28.669 "num_base_bdevs": 3, 00:16:28.669 "num_base_bdevs_discovered": 1, 00:16:28.669 "num_base_bdevs_operational": 3, 00:16:28.669 "base_bdevs_list": [ 00:16:28.669 { 00:16:28.669 "name": "BaseBdev1", 00:16:28.669 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:28.669 "is_configured": true, 00:16:28.669 "data_offset": 0, 00:16:28.669 "data_size": 65536 00:16:28.669 }, 00:16:28.669 { 00:16:28.669 "name": null, 00:16:28.669 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:28.669 "is_configured": false, 00:16:28.669 "data_offset": 0, 00:16:28.669 "data_size": 65536 00:16:28.669 }, 00:16:28.669 { 00:16:28.669 "name": 
null, 00:16:28.669 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:28.669 "is_configured": false, 00:16:28.669 "data_offset": 0, 00:16:28.669 "data_size": 65536 00:16:28.669 } 00:16:28.669 ] 00:16:28.669 }' 00:16:28.669 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.669 19:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.237 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.237 19:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:29.497 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:29.497 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:29.756 [2024-06-10 19:00:44.402454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.756 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.015 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.015 "name": "Existed_Raid", 00:16:30.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.015 "strip_size_kb": 64, 00:16:30.015 "state": "configuring", 00:16:30.015 "raid_level": "concat", 00:16:30.015 "superblock": false, 00:16:30.015 "num_base_bdevs": 3, 00:16:30.015 "num_base_bdevs_discovered": 2, 00:16:30.015 "num_base_bdevs_operational": 3, 00:16:30.016 "base_bdevs_list": [ 00:16:30.016 { 00:16:30.016 "name": "BaseBdev1", 00:16:30.016 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:30.016 "is_configured": true, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 }, 00:16:30.016 { 00:16:30.016 "name": null, 00:16:30.016 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:30.016 "is_configured": false, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 }, 00:16:30.016 { 00:16:30.016 "name": "BaseBdev3", 00:16:30.016 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:30.016 "is_configured": true, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 } 00:16:30.016 ] 00:16:30.016 }' 00:16:30.016 19:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:30.016 19:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.583 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.583 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.843 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:30.843 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:31.104 [2024-06-10 19:00:45.653782] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.104 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.364 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.364 "name": "Existed_Raid", 00:16:31.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.364 "strip_size_kb": 64, 00:16:31.364 "state": "configuring", 00:16:31.364 "raid_level": "concat", 00:16:31.364 "superblock": false, 00:16:31.364 "num_base_bdevs": 3, 00:16:31.364 "num_base_bdevs_discovered": 1, 00:16:31.364 "num_base_bdevs_operational": 3, 00:16:31.364 "base_bdevs_list": [ 00:16:31.364 { 00:16:31.364 "name": null, 00:16:31.364 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:31.364 "is_configured": false, 00:16:31.364 "data_offset": 0, 00:16:31.364 "data_size": 65536 00:16:31.364 }, 00:16:31.364 { 00:16:31.364 "name": null, 00:16:31.364 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:31.364 "is_configured": false, 00:16:31.364 "data_offset": 0, 00:16:31.364 "data_size": 65536 00:16:31.364 }, 00:16:31.364 { 00:16:31.364 "name": "BaseBdev3", 00:16:31.364 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:31.364 "is_configured": true, 00:16:31.364 "data_offset": 0, 00:16:31.364 "data_size": 65536 00:16:31.364 } 00:16:31.364 ] 00:16:31.364 }' 00:16:31.364 19:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.364 19:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.935 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.935 19:00:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.195 [2024-06-10 19:00:46.919253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.195 19:00:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.455 19:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.455 "name": "Existed_Raid", 00:16:32.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.455 "strip_size_kb": 64, 00:16:32.455 "state": "configuring", 00:16:32.455 "raid_level": "concat", 00:16:32.455 "superblock": false, 00:16:32.455 "num_base_bdevs": 3, 00:16:32.455 "num_base_bdevs_discovered": 2, 00:16:32.455 "num_base_bdevs_operational": 3, 00:16:32.455 "base_bdevs_list": [ 00:16:32.455 { 00:16:32.455 "name": null, 00:16:32.455 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:32.455 "is_configured": false, 00:16:32.455 "data_offset": 0, 00:16:32.455 "data_size": 65536 00:16:32.455 }, 00:16:32.455 { 00:16:32.455 "name": "BaseBdev2", 00:16:32.455 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:32.455 "is_configured": true, 00:16:32.455 "data_offset": 0, 00:16:32.455 "data_size": 65536 00:16:32.455 }, 00:16:32.455 { 00:16:32.455 "name": "BaseBdev3", 00:16:32.455 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:32.455 "is_configured": true, 00:16:32.455 "data_offset": 0, 00:16:32.455 "data_size": 65536 00:16:32.455 } 00:16:32.455 ] 00:16:32.455 }' 00:16:32.455 19:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.455 19:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.022 19:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.022 19:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:33.280 19:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:33.280 19:00:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.280 19:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:33.540 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ca154bb0-ec7d-473e-a802-3e7f0ad58c92 00:16:33.800 [2024-06-10 19:00:48.382358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:33.800 [2024-06-10 19:00:48.382389] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x138ee00 00:16:33.800 [2024-06-10 19:00:48.382396] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:33.800 [2024-06-10 19:00:48.382567] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x154bb40 00:16:33.800 [2024-06-10 19:00:48.382678] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x138ee00 00:16:33.800 [2024-06-10 19:00:48.382687] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x138ee00 00:16:33.800 [2024-06-10 19:00:48.382832] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.800 NewBaseBdev 00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:33.800 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.059 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:34.319 [ 00:16:34.319 { 00:16:34.319 "name": "NewBaseBdev", 00:16:34.319 "aliases": [ 00:16:34.319 "ca154bb0-ec7d-473e-a802-3e7f0ad58c92" 00:16:34.319 ], 00:16:34.319 "product_name": "Malloc disk", 00:16:34.319 "block_size": 512, 00:16:34.319 "num_blocks": 65536, 00:16:34.319 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:34.319 "assigned_rate_limits": { 00:16:34.319 "rw_ios_per_sec": 0, 00:16:34.319 "rw_mbytes_per_sec": 0, 00:16:34.319 "r_mbytes_per_sec": 0, 00:16:34.319 "w_mbytes_per_sec": 0 00:16:34.319 }, 00:16:34.319 "claimed": true, 00:16:34.319 "claim_type": "exclusive_write", 00:16:34.319 "zoned": false, 00:16:34.319 "supported_io_types": { 00:16:34.319 "read": true, 00:16:34.319 "write": true, 00:16:34.319 "unmap": true, 00:16:34.319 "write_zeroes": true, 00:16:34.319 "flush": true, 00:16:34.319 "reset": true, 00:16:34.319 "compare": false, 00:16:34.319 "compare_and_write": false, 00:16:34.319 "abort": true, 00:16:34.319 "nvme_admin": false, 00:16:34.319 "nvme_io": false 00:16:34.319 }, 00:16:34.319 "memory_domains": [ 00:16:34.319 { 00:16:34.319 "dma_device_id": "system", 00:16:34.319 "dma_device_type": 1 00:16:34.319 }, 00:16:34.319 { 00:16:34.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.319 "dma_device_type": 2 00:16:34.319 } 00:16:34.319 ], 00:16:34.319 "driver_specific": {} 00:16:34.319 } 00:16:34.319 ] 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
return 0 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.319 19:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.578 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.578 "name": "Existed_Raid", 00:16:34.578 "uuid": "c1d82dcf-4366-46bb-86e2-9d593882122a", 00:16:34.578 "strip_size_kb": 64, 00:16:34.578 "state": "online", 00:16:34.578 "raid_level": "concat", 00:16:34.578 "superblock": false, 00:16:34.578 "num_base_bdevs": 3, 00:16:34.579 "num_base_bdevs_discovered": 3, 00:16:34.579 "num_base_bdevs_operational": 3, 00:16:34.579 "base_bdevs_list": [ 
00:16:34.579 { 00:16:34.579 "name": "NewBaseBdev", 00:16:34.579 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:34.579 "is_configured": true, 00:16:34.579 "data_offset": 0, 00:16:34.579 "data_size": 65536 00:16:34.579 }, 00:16:34.579 { 00:16:34.579 "name": "BaseBdev2", 00:16:34.579 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:34.579 "is_configured": true, 00:16:34.579 "data_offset": 0, 00:16:34.579 "data_size": 65536 00:16:34.579 }, 00:16:34.579 { 00:16:34.579 "name": "BaseBdev3", 00:16:34.579 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:34.579 "is_configured": true, 00:16:34.579 "data_offset": 0, 00:16:34.579 "data_size": 65536 00:16:34.579 } 00:16:34.579 ] 00:16:34.579 }' 00:16:34.579 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.579 19:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:35.147 [2024-06-10 19:00:49.870588] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:35.147 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:35.147 "name": "Existed_Raid", 00:16:35.147 "aliases": [ 00:16:35.147 "c1d82dcf-4366-46bb-86e2-9d593882122a" 00:16:35.147 ], 00:16:35.147 "product_name": "Raid Volume", 00:16:35.147 "block_size": 512, 00:16:35.147 "num_blocks": 196608, 00:16:35.147 "uuid": "c1d82dcf-4366-46bb-86e2-9d593882122a", 00:16:35.147 "assigned_rate_limits": { 00:16:35.147 "rw_ios_per_sec": 0, 00:16:35.147 "rw_mbytes_per_sec": 0, 00:16:35.147 "r_mbytes_per_sec": 0, 00:16:35.147 "w_mbytes_per_sec": 0 00:16:35.147 }, 00:16:35.147 "claimed": false, 00:16:35.147 "zoned": false, 00:16:35.147 "supported_io_types": { 00:16:35.147 "read": true, 00:16:35.147 "write": true, 00:16:35.147 "unmap": true, 00:16:35.147 "write_zeroes": true, 00:16:35.147 "flush": true, 00:16:35.147 "reset": true, 00:16:35.147 "compare": false, 00:16:35.147 "compare_and_write": false, 00:16:35.147 "abort": false, 00:16:35.147 "nvme_admin": false, 00:16:35.147 "nvme_io": false 00:16:35.147 }, 00:16:35.147 "memory_domains": [ 00:16:35.147 { 00:16:35.147 "dma_device_id": "system", 00:16:35.147 "dma_device_type": 1 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.147 "dma_device_type": 2 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "dma_device_id": "system", 00:16:35.147 "dma_device_type": 1 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.147 "dma_device_type": 2 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "dma_device_id": "system", 00:16:35.147 "dma_device_type": 1 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.147 "dma_device_type": 2 00:16:35.147 } 00:16:35.147 ], 00:16:35.147 "driver_specific": { 00:16:35.147 "raid": { 00:16:35.147 "uuid": "c1d82dcf-4366-46bb-86e2-9d593882122a", 00:16:35.147 "strip_size_kb": 64, 00:16:35.147 "state": "online", 
00:16:35.147 "raid_level": "concat", 00:16:35.147 "superblock": false, 00:16:35.147 "num_base_bdevs": 3, 00:16:35.147 "num_base_bdevs_discovered": 3, 00:16:35.147 "num_base_bdevs_operational": 3, 00:16:35.147 "base_bdevs_list": [ 00:16:35.147 { 00:16:35.147 "name": "NewBaseBdev", 00:16:35.147 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:35.147 "is_configured": true, 00:16:35.147 "data_offset": 0, 00:16:35.147 "data_size": 65536 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "name": "BaseBdev2", 00:16:35.147 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:35.147 "is_configured": true, 00:16:35.147 "data_offset": 0, 00:16:35.147 "data_size": 65536 00:16:35.147 }, 00:16:35.147 { 00:16:35.147 "name": "BaseBdev3", 00:16:35.147 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:35.148 "is_configured": true, 00:16:35.148 "data_offset": 0, 00:16:35.148 "data_size": 65536 00:16:35.148 } 00:16:35.148 ] 00:16:35.148 } 00:16:35.148 } 00:16:35.148 }' 00:16:35.148 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.405 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:35.405 BaseBdev2 00:16:35.405 BaseBdev3' 00:16:35.405 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:35.405 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:35.405 19:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:35.664 "name": "NewBaseBdev", 00:16:35.664 "aliases": [ 00:16:35.664 "ca154bb0-ec7d-473e-a802-3e7f0ad58c92" 00:16:35.664 ], 00:16:35.664 "product_name": "Malloc 
disk", 00:16:35.664 "block_size": 512, 00:16:35.664 "num_blocks": 65536, 00:16:35.664 "uuid": "ca154bb0-ec7d-473e-a802-3e7f0ad58c92", 00:16:35.664 "assigned_rate_limits": { 00:16:35.664 "rw_ios_per_sec": 0, 00:16:35.664 "rw_mbytes_per_sec": 0, 00:16:35.664 "r_mbytes_per_sec": 0, 00:16:35.664 "w_mbytes_per_sec": 0 00:16:35.664 }, 00:16:35.664 "claimed": true, 00:16:35.664 "claim_type": "exclusive_write", 00:16:35.664 "zoned": false, 00:16:35.664 "supported_io_types": { 00:16:35.664 "read": true, 00:16:35.664 "write": true, 00:16:35.664 "unmap": true, 00:16:35.664 "write_zeroes": true, 00:16:35.664 "flush": true, 00:16:35.664 "reset": true, 00:16:35.664 "compare": false, 00:16:35.664 "compare_and_write": false, 00:16:35.664 "abort": true, 00:16:35.664 "nvme_admin": false, 00:16:35.664 "nvme_io": false 00:16:35.664 }, 00:16:35.664 "memory_domains": [ 00:16:35.664 { 00:16:35.664 "dma_device_id": "system", 00:16:35.664 "dma_device_type": 1 00:16:35.664 }, 00:16:35.664 { 00:16:35.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.664 "dma_device_type": 2 00:16:35.664 } 00:16:35.664 ], 00:16:35.664 "driver_specific": {} 00:16:35.664 }' 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.664 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:35.940 19:00:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:35.941 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.941 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:35.941 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:35.941 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:35.941 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:35.941 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.203 "name": "BaseBdev2", 00:16:36.203 "aliases": [ 00:16:36.203 "6e81008a-f200-46dd-94b6-ca3be6f24a9b" 00:16:36.203 ], 00:16:36.203 "product_name": "Malloc disk", 00:16:36.203 "block_size": 512, 00:16:36.203 "num_blocks": 65536, 00:16:36.203 "uuid": "6e81008a-f200-46dd-94b6-ca3be6f24a9b", 00:16:36.203 "assigned_rate_limits": { 00:16:36.203 "rw_ios_per_sec": 0, 00:16:36.203 "rw_mbytes_per_sec": 0, 00:16:36.203 "r_mbytes_per_sec": 0, 00:16:36.203 "w_mbytes_per_sec": 0 00:16:36.203 }, 00:16:36.203 "claimed": true, 00:16:36.203 "claim_type": "exclusive_write", 00:16:36.203 "zoned": false, 00:16:36.203 "supported_io_types": { 00:16:36.203 "read": true, 00:16:36.203 "write": true, 00:16:36.203 "unmap": true, 00:16:36.203 "write_zeroes": true, 00:16:36.203 "flush": true, 00:16:36.203 "reset": true, 00:16:36.203 "compare": false, 00:16:36.203 "compare_and_write": false, 00:16:36.203 "abort": true, 00:16:36.203 "nvme_admin": false, 00:16:36.203 "nvme_io": false 00:16:36.203 }, 00:16:36.203 "memory_domains": [ 00:16:36.203 { 00:16:36.203 "dma_device_id": "system", 
00:16:36.203 "dma_device_type": 1 00:16:36.203 }, 00:16:36.203 { 00:16:36.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.203 "dma_device_type": 2 00:16:36.203 } 00:16:36.203 ], 00:16:36.203 "driver_specific": {} 00:16:36.203 }' 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.203 19:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.463 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:36.722 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:16:36.722 "name": "BaseBdev3", 00:16:36.722 "aliases": [ 00:16:36.722 "d55b811a-59b4-47d5-be7a-6148a9b1514d" 00:16:36.722 ], 00:16:36.722 "product_name": "Malloc disk", 00:16:36.722 "block_size": 512, 00:16:36.722 "num_blocks": 65536, 00:16:36.722 "uuid": "d55b811a-59b4-47d5-be7a-6148a9b1514d", 00:16:36.722 "assigned_rate_limits": { 00:16:36.722 "rw_ios_per_sec": 0, 00:16:36.722 "rw_mbytes_per_sec": 0, 00:16:36.722 "r_mbytes_per_sec": 0, 00:16:36.722 "w_mbytes_per_sec": 0 00:16:36.722 }, 00:16:36.722 "claimed": true, 00:16:36.722 "claim_type": "exclusive_write", 00:16:36.722 "zoned": false, 00:16:36.722 "supported_io_types": { 00:16:36.722 "read": true, 00:16:36.722 "write": true, 00:16:36.722 "unmap": true, 00:16:36.722 "write_zeroes": true, 00:16:36.722 "flush": true, 00:16:36.722 "reset": true, 00:16:36.722 "compare": false, 00:16:36.722 "compare_and_write": false, 00:16:36.722 "abort": true, 00:16:36.722 "nvme_admin": false, 00:16:36.722 "nvme_io": false 00:16:36.722 }, 00:16:36.722 "memory_domains": [ 00:16:36.722 { 00:16:36.722 "dma_device_id": "system", 00:16:36.722 "dma_device_type": 1 00:16:36.722 }, 00:16:36.722 { 00:16:36.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.722 "dma_device_type": 2 00:16:36.722 } 00:16:36.722 ], 00:16:36.722 "driver_specific": {} 00:16:36.722 }' 00:16:36.722 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.722 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.722 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.722 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.722 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.981 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:37.239 [2024-06-10 19:00:51.847553] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.239 [2024-06-10 19:00:51.847580] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.239 [2024-06-10 19:00:51.847622] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.239 [2024-06-10 19:00:51.847666] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.239 [2024-06-10 19:00:51.847677] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x138ee00 name Existed_Raid, state offline 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1658692 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1658692 ']' 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1658692 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1658692 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1658692' 00:16:37.239 killing process with pid 1658692 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1658692 00:16:37.239 [2024-06-10 19:00:51.919647] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.239 19:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1658692 00:16:37.239 [2024-06-10 19:00:51.942814] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:37.501 00:16:37.501 real 0m26.652s 00:16:37.501 user 0m48.834s 00:16:37.501 sys 0m4.910s 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.501 ************************************ 00:16:37.501 END TEST raid_state_function_test 00:16:37.501 ************************************ 00:16:37.501 19:00:52 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:37.501 19:00:52 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:37.501 19:00:52 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:37.501 19:00:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.501 ************************************ 00:16:37.501 START TEST 
raid_state_function_test_sb 00:16:37.501 ************************************ 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 3 true 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:37.501 
19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1663833 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1663833' 00:16:37.501 Process raid pid: 1663833 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1663833 /var/tmp/spdk-raid.sock 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1663833 ']' 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:37.501 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.761 [2024-06-10 19:00:52.288764] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:16:37.761 [2024-06-10 19:00:52.288823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:37.761 EAL: Requested device 0000:b6:01.0 cannot be used 00:16:37.761 [2024-06-10 19:00:52.424451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.761 [2024-06-10 19:00:52.511219] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.020 [2024-06-10 19:00:52.579052] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.020 [2024-06-10 19:00:52.579084] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.650 19:00:53
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:38.650 19:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:16:38.650 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:38.918 [2024-06-10 19:00:53.397392] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.918 [2024-06-10 19:00:53.397432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.918 [2024-06-10 19:00:53.397442] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.919 [2024-06-10 19:00:53.397453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.919 [2024-06-10 19:00:53.397461] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.919 [2024-06-10 19:00:53.397472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.919 "name": "Existed_Raid", 00:16:38.919 "uuid": "974c0f6c-07eb-4751-bf31-1abb84cf7954", 00:16:38.919 "strip_size_kb": 64, 00:16:38.919 "state": "configuring", 00:16:38.919 "raid_level": "concat", 00:16:38.919 "superblock": true, 00:16:38.919 "num_base_bdevs": 3, 00:16:38.919 "num_base_bdevs_discovered": 0, 00:16:38.919 "num_base_bdevs_operational": 3, 00:16:38.919 "base_bdevs_list": [ 00:16:38.919 { 00:16:38.919 "name": "BaseBdev1", 00:16:38.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.919 "is_configured": false, 00:16:38.919 "data_offset": 0, 00:16:38.919 "data_size": 0 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "name": "BaseBdev2", 00:16:38.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.919 "is_configured": false, 00:16:38.919 "data_offset": 0, 00:16:38.919 "data_size": 0 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "name": "BaseBdev3", 00:16:38.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.919 "is_configured": false, 00:16:38.919 "data_offset": 0, 00:16:38.919 "data_size": 0 00:16:38.919 } 00:16:38.919 ] 00:16:38.919 }' 00:16:38.919 
19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.919 19:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 19:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:39.743 [2024-06-10 19:00:54.440002] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.743 [2024-06-10 19:00:54.440029] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2485f30 name Existed_Raid, state configuring 00:16:39.743 19:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:40.011 [2024-06-10 19:00:54.668628] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.011 [2024-06-10 19:00:54.668650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.011 [2024-06-10 19:00:54.668658] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.011 [2024-06-10 19:00:54.668669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.011 [2024-06-10 19:00:54.668677] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.011 [2024-06-10 19:00:54.668687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.011 19:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.272 [2024-06-10 19:00:54.906601] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.272 BaseBdev1 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:40.272 19:00:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.531 19:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.791 [ 00:16:40.791 { 00:16:40.791 "name": "BaseBdev1", 00:16:40.791 "aliases": [ 00:16:40.791 "5171ddc1-5db6-4820-ba1b-fea2704e617e" 00:16:40.791 ], 00:16:40.791 "product_name": "Malloc disk", 00:16:40.791 "block_size": 512, 00:16:40.791 "num_blocks": 65536, 00:16:40.791 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:40.791 "assigned_rate_limits": { 00:16:40.791 "rw_ios_per_sec": 0, 00:16:40.791 "rw_mbytes_per_sec": 0, 00:16:40.791 "r_mbytes_per_sec": 0, 00:16:40.791 "w_mbytes_per_sec": 0 00:16:40.791 }, 00:16:40.791 "claimed": true, 00:16:40.791 "claim_type": "exclusive_write", 00:16:40.791 "zoned": false, 00:16:40.791 "supported_io_types": { 00:16:40.791 "read": true, 00:16:40.791 "write": true, 00:16:40.791 "unmap": true, 00:16:40.791 
"write_zeroes": true, 00:16:40.791 "flush": true, 00:16:40.791 "reset": true, 00:16:40.791 "compare": false, 00:16:40.791 "compare_and_write": false, 00:16:40.791 "abort": true, 00:16:40.791 "nvme_admin": false, 00:16:40.791 "nvme_io": false 00:16:40.791 }, 00:16:40.791 "memory_domains": [ 00:16:40.791 { 00:16:40.791 "dma_device_id": "system", 00:16:40.791 "dma_device_type": 1 00:16:40.791 }, 00:16:40.791 { 00:16:40.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.791 "dma_device_type": 2 00:16:40.791 } 00:16:40.791 ], 00:16:40.791 "driver_specific": {} 00:16:40.791 } 00:16:40.791 ] 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.791 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.051 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.051 "name": "Existed_Raid", 00:16:41.051 "uuid": "f9bbca91-e5b0-42a5-8bf1-6f2247d226c8", 00:16:41.051 "strip_size_kb": 64, 00:16:41.051 "state": "configuring", 00:16:41.051 "raid_level": "concat", 00:16:41.051 "superblock": true, 00:16:41.051 "num_base_bdevs": 3, 00:16:41.051 "num_base_bdevs_discovered": 1, 00:16:41.051 "num_base_bdevs_operational": 3, 00:16:41.051 "base_bdevs_list": [ 00:16:41.051 { 00:16:41.051 "name": "BaseBdev1", 00:16:41.051 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:41.051 "is_configured": true, 00:16:41.051 "data_offset": 2048, 00:16:41.051 "data_size": 63488 00:16:41.051 }, 00:16:41.051 { 00:16:41.051 "name": "BaseBdev2", 00:16:41.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.051 "is_configured": false, 00:16:41.051 "data_offset": 0, 00:16:41.051 "data_size": 0 00:16:41.051 }, 00:16:41.051 { 00:16:41.051 "name": "BaseBdev3", 00:16:41.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.051 "is_configured": false, 00:16:41.051 "data_offset": 0, 00:16:41.051 "data_size": 0 00:16:41.051 } 00:16:41.051 ] 00:16:41.051 }' 00:16:41.051 19:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.051 19:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.617 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:41.876 [2024-06-10 19:00:56.382510] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:16:41.876 [2024-06-10 19:00:56.382543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2485800 name Existed_Raid, state configuring 00:16:41.876 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:41.876 [2024-06-10 19:00:56.611147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.876 [2024-06-10 19:00:56.612630] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.876 [2024-06-10 19:00:56.612658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.876 [2024-06-10 19:00:56.612667] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.876 [2024-06-10 19:00:56.612678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.876 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:41.876 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=3 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.135 "name": "Existed_Raid", 00:16:42.135 "uuid": "b5224a80-029a-4a3e-999d-41450ab691a3", 00:16:42.135 "strip_size_kb": 64, 00:16:42.135 "state": "configuring", 00:16:42.135 "raid_level": "concat", 00:16:42.135 "superblock": true, 00:16:42.135 "num_base_bdevs": 3, 00:16:42.135 "num_base_bdevs_discovered": 1, 00:16:42.135 "num_base_bdevs_operational": 3, 00:16:42.135 "base_bdevs_list": [ 00:16:42.135 { 00:16:42.135 "name": "BaseBdev1", 00:16:42.135 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:42.135 "is_configured": true, 00:16:42.135 "data_offset": 2048, 00:16:42.135 "data_size": 63488 00:16:42.135 }, 00:16:42.135 { 00:16:42.135 "name": "BaseBdev2", 00:16:42.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.135 "is_configured": false, 00:16:42.135 "data_offset": 0, 00:16:42.135 "data_size": 0 00:16:42.135 }, 00:16:42.135 { 00:16:42.135 "name": "BaseBdev3", 00:16:42.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.135 "is_configured": false, 00:16:42.135 "data_offset": 0, 00:16:42.135 "data_size": 0 00:16:42.135 } 
00:16:42.135 ] 00:16:42.135 }' 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.135 19:00:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.707 19:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:42.966 [2024-06-10 19:00:57.649046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.966 BaseBdev2 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:42.966 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.225 19:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.485 [ 00:16:43.485 { 00:16:43.485 "name": "BaseBdev2", 00:16:43.485 "aliases": [ 00:16:43.485 "f7808240-021e-4414-b395-3767d7880a49" 00:16:43.485 ], 00:16:43.485 "product_name": "Malloc disk", 00:16:43.485 "block_size": 512, 00:16:43.485 "num_blocks": 65536, 
00:16:43.485 "uuid": "f7808240-021e-4414-b395-3767d7880a49", 00:16:43.485 "assigned_rate_limits": { 00:16:43.485 "rw_ios_per_sec": 0, 00:16:43.485 "rw_mbytes_per_sec": 0, 00:16:43.485 "r_mbytes_per_sec": 0, 00:16:43.485 "w_mbytes_per_sec": 0 00:16:43.485 }, 00:16:43.485 "claimed": true, 00:16:43.485 "claim_type": "exclusive_write", 00:16:43.485 "zoned": false, 00:16:43.485 "supported_io_types": { 00:16:43.485 "read": true, 00:16:43.485 "write": true, 00:16:43.485 "unmap": true, 00:16:43.485 "write_zeroes": true, 00:16:43.485 "flush": true, 00:16:43.485 "reset": true, 00:16:43.485 "compare": false, 00:16:43.485 "compare_and_write": false, 00:16:43.485 "abort": true, 00:16:43.485 "nvme_admin": false, 00:16:43.485 "nvme_io": false 00:16:43.485 }, 00:16:43.485 "memory_domains": [ 00:16:43.485 { 00:16:43.485 "dma_device_id": "system", 00:16:43.485 "dma_device_type": 1 00:16:43.485 }, 00:16:43.485 { 00:16:43.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.485 "dma_device_type": 2 00:16:43.485 } 00:16:43.485 ], 00:16:43.485 "driver_specific": {} 00:16:43.485 } 00:16:43.485 ] 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.485 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.744 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.744 "name": "Existed_Raid", 00:16:43.744 "uuid": "b5224a80-029a-4a3e-999d-41450ab691a3", 00:16:43.744 "strip_size_kb": 64, 00:16:43.744 "state": "configuring", 00:16:43.744 "raid_level": "concat", 00:16:43.744 "superblock": true, 00:16:43.744 "num_base_bdevs": 3, 00:16:43.744 "num_base_bdevs_discovered": 2, 00:16:43.744 "num_base_bdevs_operational": 3, 00:16:43.744 "base_bdevs_list": [ 00:16:43.744 { 00:16:43.744 "name": "BaseBdev1", 00:16:43.744 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:43.744 "is_configured": true, 00:16:43.744 "data_offset": 2048, 00:16:43.744 "data_size": 63488 00:16:43.744 }, 00:16:43.744 { 00:16:43.744 "name": "BaseBdev2", 00:16:43.744 "uuid": "f7808240-021e-4414-b395-3767d7880a49", 00:16:43.744 "is_configured": true, 00:16:43.744 "data_offset": 2048, 00:16:43.744 "data_size": 63488 00:16:43.744 }, 00:16:43.744 { 00:16:43.744 "name": "BaseBdev3", 00:16:43.744 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:43.744 "is_configured": false, 00:16:43.744 "data_offset": 0, 00:16:43.744 "data_size": 0 00:16:43.744 } 00:16:43.744 ] 00:16:43.744 }' 00:16:43.744 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.744 19:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.311 19:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:44.570 [2024-06-10 19:00:59.128214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.570 [2024-06-10 19:00:59.128353] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24866f0 00:16:44.570 [2024-06-10 19:00:59.128369] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.570 [2024-06-10 19:00:59.128530] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24863c0 00:16:44.570 [2024-06-10 19:00:59.128642] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24866f0 00:16:44.570 [2024-06-10 19:00:59.128651] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x24866f0 00:16:44.570 [2024-06-10 19:00:59.128733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.570 BaseBdev3 00:16:44.570 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:44.570 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:16:44.570 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:44.570 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:44.570 19:00:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:44.570 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:44.570 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.829 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:45.088 [ 00:16:45.088 { 00:16:45.088 "name": "BaseBdev3", 00:16:45.088 "aliases": [ 00:16:45.088 "24497ae4-65f7-4422-a73e-41031f2ba0cd" 00:16:45.088 ], 00:16:45.088 "product_name": "Malloc disk", 00:16:45.088 "block_size": 512, 00:16:45.088 "num_blocks": 65536, 00:16:45.088 "uuid": "24497ae4-65f7-4422-a73e-41031f2ba0cd", 00:16:45.088 "assigned_rate_limits": { 00:16:45.088 "rw_ios_per_sec": 0, 00:16:45.088 "rw_mbytes_per_sec": 0, 00:16:45.088 "r_mbytes_per_sec": 0, 00:16:45.088 "w_mbytes_per_sec": 0 00:16:45.088 }, 00:16:45.088 "claimed": true, 00:16:45.088 "claim_type": "exclusive_write", 00:16:45.088 "zoned": false, 00:16:45.088 "supported_io_types": { 00:16:45.088 "read": true, 00:16:45.088 "write": true, 00:16:45.088 "unmap": true, 00:16:45.089 "write_zeroes": true, 00:16:45.089 "flush": true, 00:16:45.089 "reset": true, 00:16:45.089 "compare": false, 00:16:45.089 "compare_and_write": false, 00:16:45.089 "abort": true, 00:16:45.089 "nvme_admin": false, 00:16:45.089 "nvme_io": false 00:16:45.089 }, 00:16:45.089 "memory_domains": [ 00:16:45.089 { 00:16:45.089 "dma_device_id": "system", 00:16:45.089 "dma_device_type": 1 00:16:45.089 }, 00:16:45.089 { 00:16:45.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.089 "dma_device_type": 2 00:16:45.089 } 00:16:45.089 ], 00:16:45.089 "driver_specific": {} 00:16:45.089 } 00:16:45.089 ] 
00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.089 "name": "Existed_Raid", 00:16:45.089 
"uuid": "b5224a80-029a-4a3e-999d-41450ab691a3", 00:16:45.089 "strip_size_kb": 64, 00:16:45.089 "state": "online", 00:16:45.089 "raid_level": "concat", 00:16:45.089 "superblock": true, 00:16:45.089 "num_base_bdevs": 3, 00:16:45.089 "num_base_bdevs_discovered": 3, 00:16:45.089 "num_base_bdevs_operational": 3, 00:16:45.089 "base_bdevs_list": [ 00:16:45.089 { 00:16:45.089 "name": "BaseBdev1", 00:16:45.089 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:45.089 "is_configured": true, 00:16:45.089 "data_offset": 2048, 00:16:45.089 "data_size": 63488 00:16:45.089 }, 00:16:45.089 { 00:16:45.089 "name": "BaseBdev2", 00:16:45.089 "uuid": "f7808240-021e-4414-b395-3767d7880a49", 00:16:45.089 "is_configured": true, 00:16:45.089 "data_offset": 2048, 00:16:45.089 "data_size": 63488 00:16:45.089 }, 00:16:45.089 { 00:16:45.089 "name": "BaseBdev3", 00:16:45.089 "uuid": "24497ae4-65f7-4422-a73e-41031f2ba0cd", 00:16:45.089 "is_configured": true, 00:16:45.089 "data_offset": 2048, 00:16:45.089 "data_size": 63488 00:16:45.089 } 00:16:45.089 ] 00:16:45.089 }' 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.089 19:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:45.657 19:01:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:45.657 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:45.914 [2024-06-10 19:01:00.608367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.914 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:45.914 "name": "Existed_Raid", 00:16:45.914 "aliases": [ 00:16:45.914 "b5224a80-029a-4a3e-999d-41450ab691a3" 00:16:45.914 ], 00:16:45.914 "product_name": "Raid Volume", 00:16:45.914 "block_size": 512, 00:16:45.914 "num_blocks": 190464, 00:16:45.914 "uuid": "b5224a80-029a-4a3e-999d-41450ab691a3", 00:16:45.914 "assigned_rate_limits": { 00:16:45.914 "rw_ios_per_sec": 0, 00:16:45.914 "rw_mbytes_per_sec": 0, 00:16:45.914 "r_mbytes_per_sec": 0, 00:16:45.914 "w_mbytes_per_sec": 0 00:16:45.914 }, 00:16:45.914 "claimed": false, 00:16:45.914 "zoned": false, 00:16:45.914 "supported_io_types": { 00:16:45.914 "read": true, 00:16:45.914 "write": true, 00:16:45.914 "unmap": true, 00:16:45.914 "write_zeroes": true, 00:16:45.914 "flush": true, 00:16:45.914 "reset": true, 00:16:45.914 "compare": false, 00:16:45.914 "compare_and_write": false, 00:16:45.914 "abort": false, 00:16:45.914 "nvme_admin": false, 00:16:45.914 "nvme_io": false 00:16:45.914 }, 00:16:45.914 "memory_domains": [ 00:16:45.914 { 00:16:45.914 "dma_device_id": "system", 00:16:45.914 "dma_device_type": 1 00:16:45.914 }, 00:16:45.914 { 00:16:45.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.914 "dma_device_type": 2 00:16:45.914 }, 00:16:45.914 { 00:16:45.914 "dma_device_id": "system", 00:16:45.914 "dma_device_type": 1 00:16:45.914 }, 00:16:45.914 { 00:16:45.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.914 "dma_device_type": 2 00:16:45.914 }, 00:16:45.914 { 00:16:45.914 "dma_device_id": 
"system", 00:16:45.914 "dma_device_type": 1 00:16:45.914 }, 00:16:45.914 { 00:16:45.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.914 "dma_device_type": 2 00:16:45.914 } 00:16:45.914 ], 00:16:45.914 "driver_specific": { 00:16:45.914 "raid": { 00:16:45.914 "uuid": "b5224a80-029a-4a3e-999d-41450ab691a3", 00:16:45.914 "strip_size_kb": 64, 00:16:45.914 "state": "online", 00:16:45.914 "raid_level": "concat", 00:16:45.914 "superblock": true, 00:16:45.914 "num_base_bdevs": 3, 00:16:45.914 "num_base_bdevs_discovered": 3, 00:16:45.914 "num_base_bdevs_operational": 3, 00:16:45.914 "base_bdevs_list": [ 00:16:45.914 { 00:16:45.914 "name": "BaseBdev1", 00:16:45.914 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:45.914 "is_configured": true, 00:16:45.914 "data_offset": 2048, 00:16:45.914 "data_size": 63488 00:16:45.914 }, 00:16:45.914 { 00:16:45.914 "name": "BaseBdev2", 00:16:45.914 "uuid": "f7808240-021e-4414-b395-3767d7880a49", 00:16:45.915 "is_configured": true, 00:16:45.915 "data_offset": 2048, 00:16:45.915 "data_size": 63488 00:16:45.915 }, 00:16:45.915 { 00:16:45.915 "name": "BaseBdev3", 00:16:45.915 "uuid": "24497ae4-65f7-4422-a73e-41031f2ba0cd", 00:16:45.915 "is_configured": true, 00:16:45.915 "data_offset": 2048, 00:16:45.915 "data_size": 63488 00:16:45.915 } 00:16:45.915 ] 00:16:45.915 } 00:16:45.915 } 00:16:45.915 }' 00:16:45.915 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.173 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:46.173 BaseBdev2 00:16:46.173 BaseBdev3' 00:16:46.173 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.173 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev1 00:16:46.173 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.173 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.173 "name": "BaseBdev1", 00:16:46.173 "aliases": [ 00:16:46.173 "5171ddc1-5db6-4820-ba1b-fea2704e617e" 00:16:46.173 ], 00:16:46.173 "product_name": "Malloc disk", 00:16:46.173 "block_size": 512, 00:16:46.173 "num_blocks": 65536, 00:16:46.173 "uuid": "5171ddc1-5db6-4820-ba1b-fea2704e617e", 00:16:46.173 "assigned_rate_limits": { 00:16:46.173 "rw_ios_per_sec": 0, 00:16:46.173 "rw_mbytes_per_sec": 0, 00:16:46.173 "r_mbytes_per_sec": 0, 00:16:46.173 "w_mbytes_per_sec": 0 00:16:46.173 }, 00:16:46.173 "claimed": true, 00:16:46.173 "claim_type": "exclusive_write", 00:16:46.173 "zoned": false, 00:16:46.173 "supported_io_types": { 00:16:46.173 "read": true, 00:16:46.173 "write": true, 00:16:46.173 "unmap": true, 00:16:46.173 "write_zeroes": true, 00:16:46.173 "flush": true, 00:16:46.173 "reset": true, 00:16:46.173 "compare": false, 00:16:46.173 "compare_and_write": false, 00:16:46.173 "abort": true, 00:16:46.173 "nvme_admin": false, 00:16:46.173 "nvme_io": false 00:16:46.173 }, 00:16:46.173 "memory_domains": [ 00:16:46.173 { 00:16:46.173 "dma_device_id": "system", 00:16:46.173 "dma_device_type": 1 00:16:46.173 }, 00:16:46.173 { 00:16:46.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.173 "dma_device_type": 2 00:16:46.173 } 00:16:46.173 ], 00:16:46.173 "driver_specific": {} 00:16:46.173 }' 00:16:46.173 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.432 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.432 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.432 19:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.432 19:01:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.432 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.432 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.432 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.432 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.432 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.432 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.692 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.692 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.692 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:46.692 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.951 "name": "BaseBdev2", 00:16:46.951 "aliases": [ 00:16:46.951 "f7808240-021e-4414-b395-3767d7880a49" 00:16:46.951 ], 00:16:46.951 "product_name": "Malloc disk", 00:16:46.951 "block_size": 512, 00:16:46.951 "num_blocks": 65536, 00:16:46.951 "uuid": "f7808240-021e-4414-b395-3767d7880a49", 00:16:46.951 "assigned_rate_limits": { 00:16:46.951 "rw_ios_per_sec": 0, 00:16:46.951 "rw_mbytes_per_sec": 0, 00:16:46.951 "r_mbytes_per_sec": 0, 00:16:46.951 "w_mbytes_per_sec": 0 00:16:46.951 }, 00:16:46.951 "claimed": true, 00:16:46.951 "claim_type": "exclusive_write", 00:16:46.951 "zoned": false, 00:16:46.951 "supported_io_types": 
{ 00:16:46.951 "read": true, 00:16:46.951 "write": true, 00:16:46.951 "unmap": true, 00:16:46.951 "write_zeroes": true, 00:16:46.951 "flush": true, 00:16:46.951 "reset": true, 00:16:46.951 "compare": false, 00:16:46.951 "compare_and_write": false, 00:16:46.951 "abort": true, 00:16:46.951 "nvme_admin": false, 00:16:46.951 "nvme_io": false 00:16:46.951 }, 00:16:46.951 "memory_domains": [ 00:16:46.951 { 00:16:46.951 "dma_device_id": "system", 00:16:46.951 "dma_device_type": 1 00:16:46.951 }, 00:16:46.951 { 00:16:46.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.951 "dma_device_type": 2 00:16:46.951 } 00:16:46.951 ], 00:16:46.951 "driver_specific": {} 00:16:46.951 }' 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.951 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.209 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:47.209 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.209 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.209 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:47.209 19:01:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:47.209 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:47.209 19:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:47.469 "name": "BaseBdev3", 00:16:47.469 "aliases": [ 00:16:47.469 "24497ae4-65f7-4422-a73e-41031f2ba0cd" 00:16:47.469 ], 00:16:47.469 "product_name": "Malloc disk", 00:16:47.469 "block_size": 512, 00:16:47.469 "num_blocks": 65536, 00:16:47.469 "uuid": "24497ae4-65f7-4422-a73e-41031f2ba0cd", 00:16:47.469 "assigned_rate_limits": { 00:16:47.469 "rw_ios_per_sec": 0, 00:16:47.469 "rw_mbytes_per_sec": 0, 00:16:47.469 "r_mbytes_per_sec": 0, 00:16:47.469 "w_mbytes_per_sec": 0 00:16:47.469 }, 00:16:47.469 "claimed": true, 00:16:47.469 "claim_type": "exclusive_write", 00:16:47.469 "zoned": false, 00:16:47.469 "supported_io_types": { 00:16:47.469 "read": true, 00:16:47.469 "write": true, 00:16:47.469 "unmap": true, 00:16:47.469 "write_zeroes": true, 00:16:47.469 "flush": true, 00:16:47.469 "reset": true, 00:16:47.469 "compare": false, 00:16:47.469 "compare_and_write": false, 00:16:47.469 "abort": true, 00:16:47.469 "nvme_admin": false, 00:16:47.469 "nvme_io": false 00:16:47.469 }, 00:16:47.469 "memory_domains": [ 00:16:47.469 { 00:16:47.469 "dma_device_id": "system", 00:16:47.469 "dma_device_type": 1 00:16:47.469 }, 00:16:47.469 { 00:16:47.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.469 "dma_device_type": 2 00:16:47.469 } 00:16:47.469 ], 00:16:47.469 "driver_specific": {} 00:16:47.469 }' 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:47.469 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.728 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.728 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:47.728 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.728 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.728 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:47.728 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:47.989 [2024-06-10 19:01:02.605445] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.989 [2024-06-10 19:01:02.605469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.989 [2024-06-10 19:01:02.605504] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 
00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.989 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.248 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.248 "name": "Existed_Raid", 00:16:48.248 "uuid": "b5224a80-029a-4a3e-999d-41450ab691a3", 00:16:48.248 "strip_size_kb": 64, 00:16:48.248 "state": 
"offline", 00:16:48.248 "raid_level": "concat", 00:16:48.248 "superblock": true, 00:16:48.248 "num_base_bdevs": 3, 00:16:48.248 "num_base_bdevs_discovered": 2, 00:16:48.248 "num_base_bdevs_operational": 2, 00:16:48.248 "base_bdevs_list": [ 00:16:48.248 { 00:16:48.248 "name": null, 00:16:48.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.248 "is_configured": false, 00:16:48.248 "data_offset": 2048, 00:16:48.248 "data_size": 63488 00:16:48.248 }, 00:16:48.248 { 00:16:48.248 "name": "BaseBdev2", 00:16:48.248 "uuid": "f7808240-021e-4414-b395-3767d7880a49", 00:16:48.248 "is_configured": true, 00:16:48.248 "data_offset": 2048, 00:16:48.248 "data_size": 63488 00:16:48.248 }, 00:16:48.248 { 00:16:48.248 "name": "BaseBdev3", 00:16:48.248 "uuid": "24497ae4-65f7-4422-a73e-41031f2ba0cd", 00:16:48.248 "is_configured": true, 00:16:48.248 "data_offset": 2048, 00:16:48.248 "data_size": 63488 00:16:48.248 } 00:16:48.248 ] 00:16:48.248 }' 00:16:48.248 19:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.248 19:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.816 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:48.816 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:48.816 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.816 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:49.075 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:49.075 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.075 19:01:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:49.334 [2024-06-10 19:01:03.853748] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.334 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:49.334 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:49.334 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.334 19:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:49.593 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:49.593 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.593 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:49.593 [2024-06-10 19:01:04.313148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:49.593 [2024-06-10 19:01:04.313183] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24866f0 name Existed_Raid, state offline 00:16:49.593 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:49.593 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:49.593 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.594 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
jq -r '.[0]["name"] | select(.)' 00:16:49.852 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:49.853 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:49.853 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:49.853 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:49.853 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:49.853 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.111 BaseBdev2 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:50.111 19:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.370 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.630 [ 00:16:50.630 { 00:16:50.630 "name": "BaseBdev2", 00:16:50.630 
"aliases": [ 00:16:50.630 "408e07a1-1148-4cf7-891b-2ea3d12ac1b8" 00:16:50.630 ], 00:16:50.630 "product_name": "Malloc disk", 00:16:50.630 "block_size": 512, 00:16:50.630 "num_blocks": 65536, 00:16:50.630 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:50.630 "assigned_rate_limits": { 00:16:50.630 "rw_ios_per_sec": 0, 00:16:50.630 "rw_mbytes_per_sec": 0, 00:16:50.630 "r_mbytes_per_sec": 0, 00:16:50.630 "w_mbytes_per_sec": 0 00:16:50.630 }, 00:16:50.630 "claimed": false, 00:16:50.630 "zoned": false, 00:16:50.630 "supported_io_types": { 00:16:50.630 "read": true, 00:16:50.630 "write": true, 00:16:50.630 "unmap": true, 00:16:50.630 "write_zeroes": true, 00:16:50.630 "flush": true, 00:16:50.630 "reset": true, 00:16:50.630 "compare": false, 00:16:50.630 "compare_and_write": false, 00:16:50.630 "abort": true, 00:16:50.630 "nvme_admin": false, 00:16:50.630 "nvme_io": false 00:16:50.630 }, 00:16:50.630 "memory_domains": [ 00:16:50.630 { 00:16:50.630 "dma_device_id": "system", 00:16:50.630 "dma_device_type": 1 00:16:50.630 }, 00:16:50.630 { 00:16:50.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.630 "dma_device_type": 2 00:16:50.630 } 00:16:50.630 ], 00:16:50.630 "driver_specific": {} 00:16:50.630 } 00:16:50.630 ] 00:16:50.630 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:50.630 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:50.630 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:50.630 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.889 BaseBdev3 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:50.889 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.146 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.406 [ 00:16:51.406 { 00:16:51.406 "name": "BaseBdev3", 00:16:51.406 "aliases": [ 00:16:51.406 "4da96cf7-523f-44aa-adc6-1a7b031ef606" 00:16:51.406 ], 00:16:51.406 "product_name": "Malloc disk", 00:16:51.406 "block_size": 512, 00:16:51.406 "num_blocks": 65536, 00:16:51.406 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:51.406 "assigned_rate_limits": { 00:16:51.406 "rw_ios_per_sec": 0, 00:16:51.406 "rw_mbytes_per_sec": 0, 00:16:51.406 "r_mbytes_per_sec": 0, 00:16:51.406 "w_mbytes_per_sec": 0 00:16:51.406 }, 00:16:51.406 "claimed": false, 00:16:51.406 "zoned": false, 00:16:51.406 "supported_io_types": { 00:16:51.406 "read": true, 00:16:51.406 "write": true, 00:16:51.406 "unmap": true, 00:16:51.406 "write_zeroes": true, 00:16:51.406 "flush": true, 00:16:51.406 "reset": true, 00:16:51.406 "compare": false, 00:16:51.406 "compare_and_write": false, 00:16:51.406 "abort": true, 00:16:51.406 "nvme_admin": false, 00:16:51.406 "nvme_io": false 00:16:51.406 }, 00:16:51.406 "memory_domains": [ 00:16:51.406 { 00:16:51.406 "dma_device_id": 
"system", 00:16:51.406 "dma_device_type": 1 00:16:51.406 }, 00:16:51.406 { 00:16:51.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.406 "dma_device_type": 2 00:16:51.406 } 00:16:51.406 ], 00:16:51.406 "driver_specific": {} 00:16:51.406 } 00:16:51.406 ] 00:16:51.406 19:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:51.406 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:51.406 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:51.406 19:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:51.406 [2024-06-10 19:01:06.124464] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.406 [2024-06-10 19:01:06.124502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.406 [2024-06-10 19:01:06.124520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.406 [2024-06-10 19:01:06.125774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.406 
19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.406 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.665 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.665 "name": "Existed_Raid", 00:16:51.665 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:16:51.665 "strip_size_kb": 64, 00:16:51.665 "state": "configuring", 00:16:51.665 "raid_level": "concat", 00:16:51.665 "superblock": true, 00:16:51.665 "num_base_bdevs": 3, 00:16:51.665 "num_base_bdevs_discovered": 2, 00:16:51.665 "num_base_bdevs_operational": 3, 00:16:51.665 "base_bdevs_list": [ 00:16:51.665 { 00:16:51.665 "name": "BaseBdev1", 00:16:51.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.665 "is_configured": false, 00:16:51.665 "data_offset": 0, 00:16:51.665 "data_size": 0 00:16:51.665 }, 00:16:51.665 { 00:16:51.665 "name": "BaseBdev2", 00:16:51.665 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:51.665 "is_configured": true, 00:16:51.665 "data_offset": 2048, 00:16:51.665 "data_size": 63488 00:16:51.665 }, 00:16:51.665 { 00:16:51.665 "name": "BaseBdev3", 00:16:51.665 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:51.665 "is_configured": true, 
00:16:51.665 "data_offset": 2048, 00:16:51.665 "data_size": 63488 00:16:51.665 } 00:16:51.665 ] 00:16:51.665 }' 00:16:51.665 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.665 19:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.234 19:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:52.493 [2024-06-10 19:01:07.143122] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.493 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:52.493 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.493 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.494 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.753 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.753 "name": "Existed_Raid", 00:16:52.753 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:16:52.753 "strip_size_kb": 64, 00:16:52.753 "state": "configuring", 00:16:52.753 "raid_level": "concat", 00:16:52.753 "superblock": true, 00:16:52.753 "num_base_bdevs": 3, 00:16:52.753 "num_base_bdevs_discovered": 1, 00:16:52.753 "num_base_bdevs_operational": 3, 00:16:52.753 "base_bdevs_list": [ 00:16:52.753 { 00:16:52.753 "name": "BaseBdev1", 00:16:52.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.753 "is_configured": false, 00:16:52.753 "data_offset": 0, 00:16:52.753 "data_size": 0 00:16:52.753 }, 00:16:52.753 { 00:16:52.753 "name": null, 00:16:52.753 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:52.753 "is_configured": false, 00:16:52.753 "data_offset": 2048, 00:16:52.753 "data_size": 63488 00:16:52.753 }, 00:16:52.753 { 00:16:52.753 "name": "BaseBdev3", 00:16:52.753 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:52.753 "is_configured": true, 00:16:52.753 "data_offset": 2048, 00:16:52.753 "data_size": 63488 00:16:52.753 } 00:16:52.753 ] 00:16:52.753 }' 00:16:52.753 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.753 19:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.321 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.321 19:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.580 19:01:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:53.580 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.839 [2024-06-10 19:01:08.413700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.839 BaseBdev1 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:53.839 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.098 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.357 [ 00:16:54.357 { 00:16:54.357 "name": "BaseBdev1", 00:16:54.357 "aliases": [ 00:16:54.357 "8cccead9-bfb6-4f88-8110-0383fa2b51c8" 00:16:54.357 ], 00:16:54.357 "product_name": "Malloc disk", 00:16:54.357 "block_size": 512, 00:16:54.357 "num_blocks": 65536, 00:16:54.357 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:16:54.357 "assigned_rate_limits": { 00:16:54.357 "rw_ios_per_sec": 0, 00:16:54.357 "rw_mbytes_per_sec": 0, 00:16:54.357 
"r_mbytes_per_sec": 0, 00:16:54.357 "w_mbytes_per_sec": 0 00:16:54.357 }, 00:16:54.357 "claimed": true, 00:16:54.357 "claim_type": "exclusive_write", 00:16:54.357 "zoned": false, 00:16:54.357 "supported_io_types": { 00:16:54.357 "read": true, 00:16:54.357 "write": true, 00:16:54.357 "unmap": true, 00:16:54.357 "write_zeroes": true, 00:16:54.357 "flush": true, 00:16:54.357 "reset": true, 00:16:54.357 "compare": false, 00:16:54.357 "compare_and_write": false, 00:16:54.357 "abort": true, 00:16:54.357 "nvme_admin": false, 00:16:54.357 "nvme_io": false 00:16:54.357 }, 00:16:54.357 "memory_domains": [ 00:16:54.357 { 00:16:54.357 "dma_device_id": "system", 00:16:54.357 "dma_device_type": 1 00:16:54.357 }, 00:16:54.357 { 00:16:54.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.357 "dma_device_type": 2 00:16:54.357 } 00:16:54.357 ], 00:16:54.357 "driver_specific": {} 00:16:54.357 } 00:16:54.357 ] 00:16:54.357 19:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:54.357 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.358 
19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.358 19:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.358 19:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.358 "name": "Existed_Raid", 00:16:54.358 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:16:54.358 "strip_size_kb": 64, 00:16:54.358 "state": "configuring", 00:16:54.358 "raid_level": "concat", 00:16:54.358 "superblock": true, 00:16:54.358 "num_base_bdevs": 3, 00:16:54.358 "num_base_bdevs_discovered": 2, 00:16:54.358 "num_base_bdevs_operational": 3, 00:16:54.358 "base_bdevs_list": [ 00:16:54.358 { 00:16:54.358 "name": "BaseBdev1", 00:16:54.358 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:16:54.358 "is_configured": true, 00:16:54.358 "data_offset": 2048, 00:16:54.358 "data_size": 63488 00:16:54.358 }, 00:16:54.358 { 00:16:54.358 "name": null, 00:16:54.358 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:54.358 "is_configured": false, 00:16:54.358 "data_offset": 2048, 00:16:54.358 "data_size": 63488 00:16:54.358 }, 00:16:54.358 { 00:16:54.358 "name": "BaseBdev3", 00:16:54.358 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:54.358 "is_configured": true, 00:16:54.358 "data_offset": 2048, 00:16:54.358 "data_size": 63488 00:16:54.358 } 00:16:54.358 ] 00:16:54.358 }' 00:16:54.358 19:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.358 19:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.924 19:01:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.924 19:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.184 19:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:55.184 19:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:55.443 [2024-06-10 19:01:10.102192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.443 19:01:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.443 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.702 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.703 "name": "Existed_Raid", 00:16:55.703 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:16:55.703 "strip_size_kb": 64, 00:16:55.703 "state": "configuring", 00:16:55.703 "raid_level": "concat", 00:16:55.703 "superblock": true, 00:16:55.703 "num_base_bdevs": 3, 00:16:55.703 "num_base_bdevs_discovered": 1, 00:16:55.703 "num_base_bdevs_operational": 3, 00:16:55.703 "base_bdevs_list": [ 00:16:55.703 { 00:16:55.703 "name": "BaseBdev1", 00:16:55.703 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:16:55.703 "is_configured": true, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 }, 00:16:55.703 { 00:16:55.703 "name": null, 00:16:55.703 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:55.703 "is_configured": false, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 }, 00:16:55.703 { 00:16:55.703 "name": null, 00:16:55.703 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:55.703 "is_configured": false, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 } 00:16:55.703 ] 00:16:55.703 }' 00:16:55.703 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.703 19:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.268 19:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.268 19:01:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.526 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:56.526 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:56.785 [2024-06-10 19:01:11.349496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.785 19:01:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.044 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.044 "name": "Existed_Raid", 00:16:57.044 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:16:57.044 "strip_size_kb": 64, 00:16:57.044 "state": "configuring", 00:16:57.044 "raid_level": "concat", 00:16:57.044 "superblock": true, 00:16:57.044 "num_base_bdevs": 3, 00:16:57.044 "num_base_bdevs_discovered": 2, 00:16:57.044 "num_base_bdevs_operational": 3, 00:16:57.044 "base_bdevs_list": [ 00:16:57.044 { 00:16:57.044 "name": "BaseBdev1", 00:16:57.044 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:16:57.044 "is_configured": true, 00:16:57.044 "data_offset": 2048, 00:16:57.044 "data_size": 63488 00:16:57.044 }, 00:16:57.044 { 00:16:57.044 "name": null, 00:16:57.044 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:57.044 "is_configured": false, 00:16:57.044 "data_offset": 2048, 00:16:57.044 "data_size": 63488 00:16:57.044 }, 00:16:57.044 { 00:16:57.044 "name": "BaseBdev3", 00:16:57.044 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:57.044 "is_configured": true, 00:16:57.044 "data_offset": 2048, 00:16:57.044 "data_size": 63488 00:16:57.044 } 00:16:57.044 ] 00:16:57.044 }' 00:16:57.044 19:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.044 19:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.683 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.683 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:57.683 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:57.683 19:01:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:57.943 [2024-06-10 19:01:12.552683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.943 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.202 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.202 "name": "Existed_Raid", 00:16:58.202 "uuid": 
"584eabad-736d-4a6a-a249-58b646c986fd", 00:16:58.202 "strip_size_kb": 64, 00:16:58.202 "state": "configuring", 00:16:58.202 "raid_level": "concat", 00:16:58.202 "superblock": true, 00:16:58.202 "num_base_bdevs": 3, 00:16:58.202 "num_base_bdevs_discovered": 1, 00:16:58.202 "num_base_bdevs_operational": 3, 00:16:58.202 "base_bdevs_list": [ 00:16:58.202 { 00:16:58.202 "name": null, 00:16:58.202 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:16:58.202 "is_configured": false, 00:16:58.202 "data_offset": 2048, 00:16:58.202 "data_size": 63488 00:16:58.202 }, 00:16:58.202 { 00:16:58.202 "name": null, 00:16:58.202 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:58.202 "is_configured": false, 00:16:58.202 "data_offset": 2048, 00:16:58.202 "data_size": 63488 00:16:58.202 }, 00:16:58.202 { 00:16:58.202 "name": "BaseBdev3", 00:16:58.202 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:58.202 "is_configured": true, 00:16:58.202 "data_offset": 2048, 00:16:58.202 "data_size": 63488 00:16:58.202 } 00:16:58.202 ] 00:16:58.202 }' 00:16:58.202 19:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.202 19:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.770 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.770 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:59.028 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:59.028 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:59.287 [2024-06-10 19:01:13.794107] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.287 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:59.287 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.288 19:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.288 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.288 "name": "Existed_Raid", 00:16:59.288 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:16:59.288 "strip_size_kb": 64, 00:16:59.288 "state": "configuring", 00:16:59.288 "raid_level": "concat", 00:16:59.288 "superblock": true, 00:16:59.288 "num_base_bdevs": 3, 
00:16:59.288 "num_base_bdevs_discovered": 2, 00:16:59.288 "num_base_bdevs_operational": 3, 00:16:59.288 "base_bdevs_list": [ 00:16:59.288 { 00:16:59.288 "name": null, 00:16:59.288 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:16:59.288 "is_configured": false, 00:16:59.288 "data_offset": 2048, 00:16:59.288 "data_size": 63488 00:16:59.288 }, 00:16:59.288 { 00:16:59.288 "name": "BaseBdev2", 00:16:59.288 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:16:59.288 "is_configured": true, 00:16:59.288 "data_offset": 2048, 00:16:59.288 "data_size": 63488 00:16:59.288 }, 00:16:59.288 { 00:16:59.288 "name": "BaseBdev3", 00:16:59.288 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:16:59.288 "is_configured": true, 00:16:59.288 "data_offset": 2048, 00:16:59.288 "data_size": 63488 00:16:59.288 } 00:16:59.288 ] 00:16:59.288 }' 00:16:59.288 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.288 19:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.854 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:59.854 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.112 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:00.112 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.112 19:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:00.369 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8cccead9-bfb6-4f88-8110-0383fa2b51c8 00:17:00.627 [2024-06-10 19:01:15.257033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:00.627 [2024-06-10 19:01:15.257169] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x247d370 00:17:00.627 [2024-06-10 19:01:15.257181] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:00.627 [2024-06-10 19:01:15.257339] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2639b40 00:17:00.627 [2024-06-10 19:01:15.257437] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x247d370 00:17:00.627 [2024-06-10 19:01:15.257446] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x247d370 00:17:00.627 [2024-06-10 19:01:15.257524] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.627 NewBaseBdev 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:00.627 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.885 19:01:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:01.144 [ 00:17:01.144 { 00:17:01.144 "name": "NewBaseBdev", 00:17:01.144 "aliases": [ 00:17:01.144 "8cccead9-bfb6-4f88-8110-0383fa2b51c8" 00:17:01.144 ], 00:17:01.144 "product_name": "Malloc disk", 00:17:01.144 "block_size": 512, 00:17:01.144 "num_blocks": 65536, 00:17:01.144 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:17:01.144 "assigned_rate_limits": { 00:17:01.144 "rw_ios_per_sec": 0, 00:17:01.144 "rw_mbytes_per_sec": 0, 00:17:01.144 "r_mbytes_per_sec": 0, 00:17:01.144 "w_mbytes_per_sec": 0 00:17:01.144 }, 00:17:01.144 "claimed": true, 00:17:01.145 "claim_type": "exclusive_write", 00:17:01.145 "zoned": false, 00:17:01.145 "supported_io_types": { 00:17:01.145 "read": true, 00:17:01.145 "write": true, 00:17:01.145 "unmap": true, 00:17:01.145 "write_zeroes": true, 00:17:01.145 "flush": true, 00:17:01.145 "reset": true, 00:17:01.145 "compare": false, 00:17:01.145 "compare_and_write": false, 00:17:01.145 "abort": true, 00:17:01.145 "nvme_admin": false, 00:17:01.145 "nvme_io": false 00:17:01.145 }, 00:17:01.145 "memory_domains": [ 00:17:01.145 { 00:17:01.145 "dma_device_id": "system", 00:17:01.145 "dma_device_type": 1 00:17:01.145 }, 00:17:01.145 { 00:17:01.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.145 "dma_device_type": 2 00:17:01.145 } 00:17:01.145 ], 00:17:01.145 "driver_specific": {} 00:17:01.145 } 00:17:01.145 ] 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.145 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.404 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.404 "name": "Existed_Raid", 00:17:01.404 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:17:01.404 "strip_size_kb": 64, 00:17:01.404 "state": "online", 00:17:01.404 "raid_level": "concat", 00:17:01.404 "superblock": true, 00:17:01.404 "num_base_bdevs": 3, 00:17:01.404 "num_base_bdevs_discovered": 3, 00:17:01.404 "num_base_bdevs_operational": 3, 00:17:01.404 "base_bdevs_list": [ 00:17:01.404 { 00:17:01.404 "name": "NewBaseBdev", 00:17:01.404 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:17:01.404 "is_configured": true, 00:17:01.404 "data_offset": 2048, 00:17:01.404 "data_size": 63488 00:17:01.404 }, 00:17:01.404 { 00:17:01.404 "name": "BaseBdev2", 00:17:01.404 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 
00:17:01.404 "is_configured": true, 00:17:01.404 "data_offset": 2048, 00:17:01.404 "data_size": 63488 00:17:01.404 }, 00:17:01.404 { 00:17:01.404 "name": "BaseBdev3", 00:17:01.404 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:17:01.404 "is_configured": true, 00:17:01.404 "data_offset": 2048, 00:17:01.404 "data_size": 63488 00:17:01.404 } 00:17:01.404 ] 00:17:01.404 }' 00:17:01.404 19:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.404 19:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:01.971 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:01.971 [2024-06-10 19:01:16.709113] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.972 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:01.972 "name": "Existed_Raid", 00:17:01.972 "aliases": [ 00:17:01.972 "584eabad-736d-4a6a-a249-58b646c986fd" 00:17:01.972 ], 00:17:01.972 "product_name": "Raid Volume", 
00:17:01.972 "block_size": 512, 00:17:01.972 "num_blocks": 190464, 00:17:01.972 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:17:01.972 "assigned_rate_limits": { 00:17:01.972 "rw_ios_per_sec": 0, 00:17:01.972 "rw_mbytes_per_sec": 0, 00:17:01.972 "r_mbytes_per_sec": 0, 00:17:01.972 "w_mbytes_per_sec": 0 00:17:01.972 }, 00:17:01.972 "claimed": false, 00:17:01.972 "zoned": false, 00:17:01.972 "supported_io_types": { 00:17:01.972 "read": true, 00:17:01.972 "write": true, 00:17:01.972 "unmap": true, 00:17:01.972 "write_zeroes": true, 00:17:01.972 "flush": true, 00:17:01.972 "reset": true, 00:17:01.972 "compare": false, 00:17:01.972 "compare_and_write": false, 00:17:01.972 "abort": false, 00:17:01.972 "nvme_admin": false, 00:17:01.972 "nvme_io": false 00:17:01.972 }, 00:17:01.972 "memory_domains": [ 00:17:01.972 { 00:17:01.972 "dma_device_id": "system", 00:17:01.972 "dma_device_type": 1 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.972 "dma_device_type": 2 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "dma_device_id": "system", 00:17:01.972 "dma_device_type": 1 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.972 "dma_device_type": 2 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "dma_device_id": "system", 00:17:01.972 "dma_device_type": 1 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.972 "dma_device_type": 2 00:17:01.972 } 00:17:01.972 ], 00:17:01.972 "driver_specific": { 00:17:01.972 "raid": { 00:17:01.972 "uuid": "584eabad-736d-4a6a-a249-58b646c986fd", 00:17:01.972 "strip_size_kb": 64, 00:17:01.972 "state": "online", 00:17:01.972 "raid_level": "concat", 00:17:01.972 "superblock": true, 00:17:01.972 "num_base_bdevs": 3, 00:17:01.972 "num_base_bdevs_discovered": 3, 00:17:01.972 "num_base_bdevs_operational": 3, 00:17:01.972 "base_bdevs_list": [ 00:17:01.972 { 00:17:01.972 "name": "NewBaseBdev", 00:17:01.972 "uuid": 
"8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:17:01.972 "is_configured": true, 00:17:01.972 "data_offset": 2048, 00:17:01.972 "data_size": 63488 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "name": "BaseBdev2", 00:17:01.972 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:17:01.972 "is_configured": true, 00:17:01.972 "data_offset": 2048, 00:17:01.972 "data_size": 63488 00:17:01.972 }, 00:17:01.972 { 00:17:01.972 "name": "BaseBdev3", 00:17:01.972 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:17:01.972 "is_configured": true, 00:17:01.972 "data_offset": 2048, 00:17:01.972 "data_size": 63488 00:17:01.972 } 00:17:01.972 ] 00:17:01.972 } 00:17:01.972 } 00:17:01.972 }' 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:02.231 BaseBdev2 00:17:02.231 BaseBdev3' 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.231 "name": "NewBaseBdev", 00:17:02.231 "aliases": [ 00:17:02.231 "8cccead9-bfb6-4f88-8110-0383fa2b51c8" 00:17:02.231 ], 00:17:02.231 "product_name": "Malloc disk", 00:17:02.231 "block_size": 512, 00:17:02.231 "num_blocks": 65536, 00:17:02.231 "uuid": "8cccead9-bfb6-4f88-8110-0383fa2b51c8", 00:17:02.231 "assigned_rate_limits": { 00:17:02.231 "rw_ios_per_sec": 0, 00:17:02.231 "rw_mbytes_per_sec": 0, 00:17:02.231 "r_mbytes_per_sec": 0, 
00:17:02.231 "w_mbytes_per_sec": 0 00:17:02.231 }, 00:17:02.231 "claimed": true, 00:17:02.231 "claim_type": "exclusive_write", 00:17:02.231 "zoned": false, 00:17:02.231 "supported_io_types": { 00:17:02.231 "read": true, 00:17:02.231 "write": true, 00:17:02.231 "unmap": true, 00:17:02.231 "write_zeroes": true, 00:17:02.231 "flush": true, 00:17:02.231 "reset": true, 00:17:02.231 "compare": false, 00:17:02.231 "compare_and_write": false, 00:17:02.231 "abort": true, 00:17:02.231 "nvme_admin": false, 00:17:02.231 "nvme_io": false 00:17:02.231 }, 00:17:02.231 "memory_domains": [ 00:17:02.231 { 00:17:02.231 "dma_device_id": "system", 00:17:02.231 "dma_device_type": 1 00:17:02.231 }, 00:17:02.231 { 00:17:02.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.231 "dma_device_type": 2 00:17:02.231 } 00:17:02.231 ], 00:17:02.231 "driver_specific": {} 00:17:02.231 }' 00:17:02.231 19:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.490 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.749 19:01:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.749 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.749 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.749 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:02.749 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.008 "name": "BaseBdev2", 00:17:03.008 "aliases": [ 00:17:03.008 "408e07a1-1148-4cf7-891b-2ea3d12ac1b8" 00:17:03.008 ], 00:17:03.008 "product_name": "Malloc disk", 00:17:03.008 "block_size": 512, 00:17:03.008 "num_blocks": 65536, 00:17:03.008 "uuid": "408e07a1-1148-4cf7-891b-2ea3d12ac1b8", 00:17:03.008 "assigned_rate_limits": { 00:17:03.008 "rw_ios_per_sec": 0, 00:17:03.008 "rw_mbytes_per_sec": 0, 00:17:03.008 "r_mbytes_per_sec": 0, 00:17:03.008 "w_mbytes_per_sec": 0 00:17:03.008 }, 00:17:03.008 "claimed": true, 00:17:03.008 "claim_type": "exclusive_write", 00:17:03.008 "zoned": false, 00:17:03.008 "supported_io_types": { 00:17:03.008 "read": true, 00:17:03.008 "write": true, 00:17:03.008 "unmap": true, 00:17:03.008 "write_zeroes": true, 00:17:03.008 "flush": true, 00:17:03.008 "reset": true, 00:17:03.008 "compare": false, 00:17:03.008 "compare_and_write": false, 00:17:03.008 "abort": true, 00:17:03.008 "nvme_admin": false, 00:17:03.008 "nvme_io": false 00:17:03.008 }, 00:17:03.008 "memory_domains": [ 00:17:03.008 { 00:17:03.008 "dma_device_id": "system", 00:17:03.008 "dma_device_type": 1 00:17:03.008 }, 00:17:03.008 { 00:17:03.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.008 "dma_device_type": 2 00:17:03.008 } 00:17:03.008 ], 00:17:03.008 "driver_specific": {} 00:17:03.008 }' 
00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.008 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:03.267 19:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.525 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.525 "name": "BaseBdev3", 00:17:03.525 "aliases": [ 00:17:03.525 "4da96cf7-523f-44aa-adc6-1a7b031ef606" 00:17:03.525 ], 00:17:03.525 "product_name": "Malloc disk", 00:17:03.525 
"block_size": 512, 00:17:03.525 "num_blocks": 65536, 00:17:03.525 "uuid": "4da96cf7-523f-44aa-adc6-1a7b031ef606", 00:17:03.525 "assigned_rate_limits": { 00:17:03.525 "rw_ios_per_sec": 0, 00:17:03.525 "rw_mbytes_per_sec": 0, 00:17:03.525 "r_mbytes_per_sec": 0, 00:17:03.525 "w_mbytes_per_sec": 0 00:17:03.525 }, 00:17:03.525 "claimed": true, 00:17:03.525 "claim_type": "exclusive_write", 00:17:03.525 "zoned": false, 00:17:03.525 "supported_io_types": { 00:17:03.525 "read": true, 00:17:03.525 "write": true, 00:17:03.525 "unmap": true, 00:17:03.525 "write_zeroes": true, 00:17:03.525 "flush": true, 00:17:03.525 "reset": true, 00:17:03.525 "compare": false, 00:17:03.525 "compare_and_write": false, 00:17:03.525 "abort": true, 00:17:03.525 "nvme_admin": false, 00:17:03.525 "nvme_io": false 00:17:03.525 }, 00:17:03.525 "memory_domains": [ 00:17:03.525 { 00:17:03.525 "dma_device_id": "system", 00:17:03.525 "dma_device_type": 1 00:17:03.525 }, 00:17:03.525 { 00:17:03.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.526 "dma_device_type": 2 00:17:03.526 } 00:17:03.526 ], 00:17:03.526 "driver_specific": {} 00:17:03.526 }' 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.526 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.784 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.784 19:01:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.784 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.784 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.784 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.784 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.043 [2024-06-10 19:01:18.621925] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.043 [2024-06-10 19:01:18.621946] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.043 [2024-06-10 19:01:18.621991] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.043 [2024-06-10 19:01:18.622035] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.043 [2024-06-10 19:01:18.622046] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x247d370 name Existed_Raid, state offline 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1663833 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1663833 ']' 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1663833 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1663833 00:17:04.043 
19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1663833' 00:17:04.043 killing process with pid 1663833 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1663833 00:17:04.043 [2024-06-10 19:01:18.691843] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.043 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1663833 00:17:04.043 [2024-06-10 19:01:18.715071] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.302 19:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:04.302 00:17:04.302 real 0m26.684s 00:17:04.302 user 0m48.895s 00:17:04.302 sys 0m4.878s 00:17:04.302 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:04.302 19:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.302 ************************************ 00:17:04.302 END TEST raid_state_function_test_sb 00:17:04.302 ************************************ 00:17:04.302 19:01:18 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:04.302 19:01:18 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:17:04.302 19:01:18 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:04.302 19:01:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.302 ************************************ 00:17:04.302 START TEST raid_superblock_test 00:17:04.302 ************************************ 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1124 -- # raid_superblock_test concat 3 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:04.302 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1668976 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1668976 /var/tmp/spdk-raid.sock 00:17:04.303 
19:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1668976 ']' 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:04.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.303 19:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:04.303 [2024-06-10 19:01:19.041341] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:17:04.303 [2024-06-10 19:01:19.041396] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668976 ] 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.0 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.1 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.2 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.3 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.4 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.5 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.6 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:01.7 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.0 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.1 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.2 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.3 cannot be used 00:17:04.562 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.4 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.5 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.6 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b6:02.7 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.0 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.1 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.2 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.3 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.4 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.5 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.6 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:01.7 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.0 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.1 cannot be used 00:17:04.562 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.2 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.3 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.4 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.5 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.6 cannot be used 00:17:04.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:04.562 EAL: Requested device 0000:b8:02.7 cannot be used 00:17:04.562 [2024-06-10 19:01:19.174281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.562 [2024-06-10 19:01:19.259891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.562 [2024-06-10 19:01:19.317097] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.562 [2024-06-10 19:01:19.317132] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.499 19:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:05.499 malloc1 00:17:05.499 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.758 [2024-06-10 19:01:20.385317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.758 [2024-06-10 19:01:20.385359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.758 [2024-06-10 19:01:20.385377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164db70 00:17:05.758 [2024-06-10 19:01:20.385389] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.758 [2024-06-10 19:01:20.386891] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.758 [2024-06-10 19:01:20.386925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.758 pt1 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.758 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:06.017 malloc2 00:17:06.017 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.276 [2024-06-10 19:01:20.831048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.276 [2024-06-10 19:01:20.831087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.276 [2024-06-10 19:01:20.831103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164ef70 00:17:06.276 [2024-06-10 19:01:20.831114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.276 [2024-06-10 19:01:20.832537] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.276 [2024-06-10 19:01:20.832564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.276 pt2 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:06.276 19:01:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.276 19:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:06.535 malloc3 00:17:06.535 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:06.794 [2024-06-10 19:01:21.292489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:06.794 [2024-06-10 19:01:21.292529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.794 [2024-06-10 19:01:21.292544] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e5940 00:17:06.794 [2024-06-10 19:01:21.292556] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.794 [2024-06-10 19:01:21.293905] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.794 [2024-06-10 19:01:21.293932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:06.794 pt3 00:17:06.794 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:06.795 
19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:06.795 [2024-06-10 19:01:21.517098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.795 [2024-06-10 19:01:21.518246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.795 [2024-06-10 19:01:21.518294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:06.795 [2024-06-10 19:01:21.518428] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1646210 00:17:06.795 [2024-06-10 19:01:21.518438] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:06.795 [2024-06-10 19:01:21.518616] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x164d840 00:17:06.795 [2024-06-10 19:01:21.518743] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1646210 00:17:06.795 [2024-06-10 19:01:21.518752] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1646210 00:17:06.795 [2024-06-10 19:01:21.518837] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.795 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.054 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.054 "name": "raid_bdev1", 00:17:07.054 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:07.054 "strip_size_kb": 64, 00:17:07.054 "state": "online", 00:17:07.054 "raid_level": "concat", 00:17:07.054 "superblock": true, 00:17:07.054 "num_base_bdevs": 3, 00:17:07.054 "num_base_bdevs_discovered": 3, 00:17:07.054 "num_base_bdevs_operational": 3, 00:17:07.054 "base_bdevs_list": [ 00:17:07.054 { 00:17:07.054 "name": "pt1", 00:17:07.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.054 "is_configured": true, 00:17:07.054 "data_offset": 2048, 00:17:07.054 "data_size": 63488 00:17:07.054 }, 00:17:07.054 { 00:17:07.054 "name": "pt2", 00:17:07.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.054 "is_configured": true, 00:17:07.054 "data_offset": 2048, 00:17:07.054 "data_size": 63488 00:17:07.054 }, 00:17:07.054 { 00:17:07.054 "name": "pt3", 00:17:07.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.054 "is_configured": true, 00:17:07.054 "data_offset": 2048, 
00:17:07.054 "data_size": 63488 00:17:07.054 } 00:17:07.054 ] 00:17:07.054 }' 00:17:07.054 19:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.054 19:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:07.621 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:07.880 [2024-06-10 19:01:22.535954] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.880 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:07.880 "name": "raid_bdev1", 00:17:07.880 "aliases": [ 00:17:07.880 "2b2bc0d4-0d08-4946-b34d-487d1f177320" 00:17:07.880 ], 00:17:07.880 "product_name": "Raid Volume", 00:17:07.880 "block_size": 512, 00:17:07.880 "num_blocks": 190464, 00:17:07.880 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:07.880 "assigned_rate_limits": { 00:17:07.880 "rw_ios_per_sec": 0, 00:17:07.880 "rw_mbytes_per_sec": 0, 00:17:07.880 "r_mbytes_per_sec": 0, 00:17:07.880 "w_mbytes_per_sec": 0 00:17:07.880 }, 00:17:07.880 "claimed": false, 00:17:07.880 "zoned": false, 
00:17:07.880 "supported_io_types": { 00:17:07.880 "read": true, 00:17:07.880 "write": true, 00:17:07.880 "unmap": true, 00:17:07.881 "write_zeroes": true, 00:17:07.881 "flush": true, 00:17:07.881 "reset": true, 00:17:07.881 "compare": false, 00:17:07.881 "compare_and_write": false, 00:17:07.881 "abort": false, 00:17:07.881 "nvme_admin": false, 00:17:07.881 "nvme_io": false 00:17:07.881 }, 00:17:07.881 "memory_domains": [ 00:17:07.881 { 00:17:07.881 "dma_device_id": "system", 00:17:07.881 "dma_device_type": 1 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.881 "dma_device_type": 2 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "dma_device_id": "system", 00:17:07.881 "dma_device_type": 1 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.881 "dma_device_type": 2 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "dma_device_id": "system", 00:17:07.881 "dma_device_type": 1 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.881 "dma_device_type": 2 00:17:07.881 } 00:17:07.881 ], 00:17:07.881 "driver_specific": { 00:17:07.881 "raid": { 00:17:07.881 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:07.881 "strip_size_kb": 64, 00:17:07.881 "state": "online", 00:17:07.881 "raid_level": "concat", 00:17:07.881 "superblock": true, 00:17:07.881 "num_base_bdevs": 3, 00:17:07.881 "num_base_bdevs_discovered": 3, 00:17:07.881 "num_base_bdevs_operational": 3, 00:17:07.881 "base_bdevs_list": [ 00:17:07.881 { 00:17:07.881 "name": "pt1", 00:17:07.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.881 "is_configured": true, 00:17:07.881 "data_offset": 2048, 00:17:07.881 "data_size": 63488 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 "name": "pt2", 00:17:07.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.881 "is_configured": true, 00:17:07.881 "data_offset": 2048, 00:17:07.881 "data_size": 63488 00:17:07.881 }, 00:17:07.881 { 00:17:07.881 
"name": "pt3", 00:17:07.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.881 "is_configured": true, 00:17:07.881 "data_offset": 2048, 00:17:07.881 "data_size": 63488 00:17:07.881 } 00:17:07.881 ] 00:17:07.881 } 00:17:07.881 } 00:17:07.881 }' 00:17:07.881 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.881 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:07.881 pt2 00:17:07.881 pt3' 00:17:07.881 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:07.881 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:07.881 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:08.140 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:08.140 "name": "pt1", 00:17:08.140 "aliases": [ 00:17:08.140 "00000000-0000-0000-0000-000000000001" 00:17:08.140 ], 00:17:08.140 "product_name": "passthru", 00:17:08.140 "block_size": 512, 00:17:08.140 "num_blocks": 65536, 00:17:08.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.140 "assigned_rate_limits": { 00:17:08.140 "rw_ios_per_sec": 0, 00:17:08.140 "rw_mbytes_per_sec": 0, 00:17:08.140 "r_mbytes_per_sec": 0, 00:17:08.140 "w_mbytes_per_sec": 0 00:17:08.140 }, 00:17:08.140 "claimed": true, 00:17:08.140 "claim_type": "exclusive_write", 00:17:08.140 "zoned": false, 00:17:08.140 "supported_io_types": { 00:17:08.140 "read": true, 00:17:08.140 "write": true, 00:17:08.140 "unmap": true, 00:17:08.140 "write_zeroes": true, 00:17:08.140 "flush": true, 00:17:08.140 "reset": true, 00:17:08.140 "compare": false, 00:17:08.140 "compare_and_write": false, 00:17:08.140 "abort": true, 00:17:08.140 "nvme_admin": false, 
00:17:08.140 "nvme_io": false 00:17:08.140 }, 00:17:08.140 "memory_domains": [ 00:17:08.140 { 00:17:08.140 "dma_device_id": "system", 00:17:08.140 "dma_device_type": 1 00:17:08.140 }, 00:17:08.140 { 00:17:08.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.140 "dma_device_type": 2 00:17:08.140 } 00:17:08.140 ], 00:17:08.140 "driver_specific": { 00:17:08.140 "passthru": { 00:17:08.140 "name": "pt1", 00:17:08.140 "base_bdev_name": "malloc1" 00:17:08.140 } 00:17:08.140 } 00:17:08.140 }' 00:17:08.140 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.140 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.399 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:08.399 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.399 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.399 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:08.399 19:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.399 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.399 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:08.399 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.399 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.659 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:08.659 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:08.659 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
00:17:08.659 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:08.659 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:08.659 "name": "pt2", 00:17:08.659 "aliases": [ 00:17:08.659 "00000000-0000-0000-0000-000000000002" 00:17:08.659 ], 00:17:08.659 "product_name": "passthru", 00:17:08.659 "block_size": 512, 00:17:08.659 "num_blocks": 65536, 00:17:08.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.659 "assigned_rate_limits": { 00:17:08.659 "rw_ios_per_sec": 0, 00:17:08.659 "rw_mbytes_per_sec": 0, 00:17:08.659 "r_mbytes_per_sec": 0, 00:17:08.659 "w_mbytes_per_sec": 0 00:17:08.659 }, 00:17:08.659 "claimed": true, 00:17:08.659 "claim_type": "exclusive_write", 00:17:08.659 "zoned": false, 00:17:08.659 "supported_io_types": { 00:17:08.659 "read": true, 00:17:08.659 "write": true, 00:17:08.659 "unmap": true, 00:17:08.659 "write_zeroes": true, 00:17:08.659 "flush": true, 00:17:08.659 "reset": true, 00:17:08.659 "compare": false, 00:17:08.659 "compare_and_write": false, 00:17:08.659 "abort": true, 00:17:08.659 "nvme_admin": false, 00:17:08.659 "nvme_io": false 00:17:08.659 }, 00:17:08.659 "memory_domains": [ 00:17:08.659 { 00:17:08.659 "dma_device_id": "system", 00:17:08.659 "dma_device_type": 1 00:17:08.659 }, 00:17:08.659 { 00:17:08.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.659 "dma_device_type": 2 00:17:08.659 } 00:17:08.659 ], 00:17:08.659 "driver_specific": { 00:17:08.659 "passthru": { 00:17:08.659 "name": "pt2", 00:17:08.659 "base_bdev_name": "malloc2" 00:17:08.659 } 00:17:08.659 } 00:17:08.659 }' 00:17:08.659 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:08.918 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.176 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.176 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:09.176 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.176 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:09.176 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:09.445 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:09.445 "name": "pt3", 00:17:09.445 "aliases": [ 00:17:09.445 "00000000-0000-0000-0000-000000000003" 00:17:09.445 ], 00:17:09.445 "product_name": "passthru", 00:17:09.445 "block_size": 512, 00:17:09.445 "num_blocks": 65536, 00:17:09.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.445 "assigned_rate_limits": { 00:17:09.445 "rw_ios_per_sec": 0, 00:17:09.445 "rw_mbytes_per_sec": 0, 00:17:09.445 "r_mbytes_per_sec": 0, 00:17:09.445 "w_mbytes_per_sec": 0 00:17:09.445 }, 00:17:09.445 "claimed": true, 00:17:09.445 "claim_type": "exclusive_write", 00:17:09.445 "zoned": false, 00:17:09.445 "supported_io_types": { 00:17:09.445 "read": true, 00:17:09.445 "write": true, 
00:17:09.445 "unmap": true, 00:17:09.445 "write_zeroes": true, 00:17:09.445 "flush": true, 00:17:09.445 "reset": true, 00:17:09.445 "compare": false, 00:17:09.445 "compare_and_write": false, 00:17:09.445 "abort": true, 00:17:09.445 "nvme_admin": false, 00:17:09.445 "nvme_io": false 00:17:09.445 }, 00:17:09.445 "memory_domains": [ 00:17:09.445 { 00:17:09.445 "dma_device_id": "system", 00:17:09.445 "dma_device_type": 1 00:17:09.445 }, 00:17:09.445 { 00:17:09.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.446 "dma_device_type": 2 00:17:09.446 } 00:17:09.446 ], 00:17:09.446 "driver_specific": { 00:17:09.446 "passthru": { 00:17:09.446 "name": "pt3", 00:17:09.446 "base_bdev_name": "malloc3" 00:17:09.446 } 00:17:09.446 } 00:17:09.446 }' 00:17:09.446 19:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.446 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.709 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:09.709 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.709 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.709 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:09.709 19:01:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:09.709 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:09.974 [2024-06-10 19:01:24.517181] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.974 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2b2bc0d4-0d08-4946-b34d-487d1f177320 00:17:09.974 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2b2bc0d4-0d08-4946-b34d-487d1f177320 ']' 00:17:09.974 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:10.234 [2024-06-10 19:01:24.741539] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.234 [2024-06-10 19:01:24.741557] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.234 [2024-06-10 19:01:24.741607] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.234 [2024-06-10 19:01:24.741653] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.234 [2024-06-10 19:01:24.741664] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1646210 name raid_bdev1, state offline 00:17:10.234 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.234 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:10.493 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:10.493 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:17:10.493 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.493 19:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:10.493 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.493 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:10.753 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.753 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:11.012 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:11.012 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 
00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:11.271 19:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:11.531 [2024-06-10 19:01:26.113086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:11.531 [2024-06-10 19:01:26.114348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:11.531 [2024-06-10 19:01:26.114389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:11.531 [2024-06-10 19:01:26.114431] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:11.531 [2024-06-10 
19:01:26.114467] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:11.531 [2024-06-10 19:01:26.114488] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:11.531 [2024-06-10 19:01:26.114505] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.531 [2024-06-10 19:01:26.114514] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x164e010 name raid_bdev1, state configuring 00:17:11.531 request: 00:17:11.531 { 00:17:11.531 "name": "raid_bdev1", 00:17:11.531 "raid_level": "concat", 00:17:11.531 "base_bdevs": [ 00:17:11.531 "malloc1", 00:17:11.531 "malloc2", 00:17:11.531 "malloc3" 00:17:11.531 ], 00:17:11.531 "superblock": false, 00:17:11.531 "strip_size_kb": 64, 00:17:11.531 "method": "bdev_raid_create", 00:17:11.531 "req_id": 1 00:17:11.531 } 00:17:11.531 Got JSON-RPC error response 00:17:11.531 response: 00:17:11.531 { 00:17:11.531 "code": -17, 00:17:11.531 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:11.531 } 00:17:11.531 19:01:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:17:11.531 19:01:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:11.531 19:01:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:11.531 19:01:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:11.531 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.531 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:11.789 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:11.789 19:01:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:11.789 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.049 [2024-06-10 19:01:26.570229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.049 [2024-06-10 19:01:26.570270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.049 [2024-06-10 19:01:26.570287] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164dda0 00:17:12.049 [2024-06-10 19:01:26.570298] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.049 [2024-06-10 19:01:26.571780] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.049 [2024-06-10 19:01:26.571808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.049 [2024-06-10 19:01:26.571870] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:12.049 [2024-06-10 19:01:26.571892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.049 pt1 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.049 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.308 19:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.308 "name": "raid_bdev1", 00:17:12.308 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:12.308 "strip_size_kb": 64, 00:17:12.308 "state": "configuring", 00:17:12.308 "raid_level": "concat", 00:17:12.308 "superblock": true, 00:17:12.308 "num_base_bdevs": 3, 00:17:12.308 "num_base_bdevs_discovered": 1, 00:17:12.308 "num_base_bdevs_operational": 3, 00:17:12.308 "base_bdevs_list": [ 00:17:12.308 { 00:17:12.308 "name": "pt1", 00:17:12.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.308 "is_configured": true, 00:17:12.308 "data_offset": 2048, 00:17:12.308 "data_size": 63488 00:17:12.308 }, 00:17:12.308 { 00:17:12.308 "name": null, 00:17:12.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.308 "is_configured": false, 00:17:12.308 "data_offset": 2048, 00:17:12.308 "data_size": 63488 00:17:12.308 }, 00:17:12.308 { 00:17:12.308 "name": null, 00:17:12.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.308 "is_configured": false, 00:17:12.308 "data_offset": 2048, 00:17:12.308 "data_size": 63488 00:17:12.308 } 00:17:12.308 ] 00:17:12.308 }' 00:17:12.308 19:01:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.308 19:01:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.876 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:12.876 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.876 [2024-06-10 19:01:27.608964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.876 [2024-06-10 19:01:27.609011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.876 [2024-06-10 19:01:27.609030] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x164e010 00:17:12.876 [2024-06-10 19:01:27.609042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.876 [2024-06-10 19:01:27.609344] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.876 [2024-06-10 19:01:27.609359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.876 [2024-06-10 19:01:27.609415] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:12.876 [2024-06-10 19:01:27.609433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.876 pt2 00:17:12.876 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:13.135 [2024-06-10 19:01:27.833557] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.135 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.136 19:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.394 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.394 "name": "raid_bdev1", 00:17:13.394 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:13.394 "strip_size_kb": 64, 00:17:13.394 "state": "configuring", 00:17:13.394 "raid_level": "concat", 00:17:13.394 "superblock": true, 00:17:13.394 "num_base_bdevs": 3, 00:17:13.394 "num_base_bdevs_discovered": 1, 00:17:13.394 "num_base_bdevs_operational": 3, 00:17:13.394 "base_bdevs_list": [ 00:17:13.394 { 00:17:13.394 "name": "pt1", 00:17:13.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.394 "is_configured": true, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 }, 00:17:13.394 { 00:17:13.394 
"name": null, 00:17:13.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.394 "is_configured": false, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 }, 00:17:13.394 { 00:17:13.394 "name": null, 00:17:13.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.394 "is_configured": false, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 } 00:17:13.394 ] 00:17:13.394 }' 00:17:13.394 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.394 19:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.962 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:13.962 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:13.962 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.222 [2024-06-10 19:01:28.848225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.222 [2024-06-10 19:01:28.848270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.222 [2024-06-10 19:01:28.848287] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16448e0 00:17:14.222 [2024-06-10 19:01:28.848299] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.222 [2024-06-10 19:01:28.848627] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.222 [2024-06-10 19:01:28.848644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.222 [2024-06-10 19:01:28.848701] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:14.222 [2024-06-10 19:01:28.848718] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.222 pt2 00:17:14.222 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:14.222 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:14.222 19:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:14.481 [2024-06-10 19:01:29.076836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:14.481 [2024-06-10 19:01:29.076878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.481 [2024-06-10 19:01:29.076893] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1646be0 00:17:14.481 [2024-06-10 19:01:29.076905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.481 [2024-06-10 19:01:29.077204] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.481 [2024-06-10 19:01:29.077220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:14.481 [2024-06-10 19:01:29.077274] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:14.481 [2024-06-10 19:01:29.077290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:14.481 [2024-06-10 19:01:29.077385] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1646e90 00:17:14.481 [2024-06-10 19:01:29.077394] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:14.481 [2024-06-10 19:01:29.077547] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16497f0 00:17:14.481 [2024-06-10 19:01:29.077669] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1646e90 
00:17:14.481 [2024-06-10 19:01:29.077679] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1646e90 00:17:14.481 [2024-06-10 19:01:29.077766] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.481 pt3 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.481 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.740 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:14.740 "name": "raid_bdev1", 00:17:14.740 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:14.740 "strip_size_kb": 64, 00:17:14.740 "state": "online", 00:17:14.740 "raid_level": "concat", 00:17:14.740 "superblock": true, 00:17:14.740 "num_base_bdevs": 3, 00:17:14.740 "num_base_bdevs_discovered": 3, 00:17:14.740 "num_base_bdevs_operational": 3, 00:17:14.740 "base_bdevs_list": [ 00:17:14.740 { 00:17:14.740 "name": "pt1", 00:17:14.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.740 "is_configured": true, 00:17:14.740 "data_offset": 2048, 00:17:14.740 "data_size": 63488 00:17:14.740 }, 00:17:14.740 { 00:17:14.740 "name": "pt2", 00:17:14.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.740 "is_configured": true, 00:17:14.740 "data_offset": 2048, 00:17:14.741 "data_size": 63488 00:17:14.741 }, 00:17:14.741 { 00:17:14.741 "name": "pt3", 00:17:14.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.741 "is_configured": true, 00:17:14.741 "data_offset": 2048, 00:17:14.741 "data_size": 63488 00:17:14.741 } 00:17:14.741 ] 00:17:14.741 }' 00:17:14.741 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.741 19:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:15.309 19:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:15.569 [2024-06-10 19:01:30.075712] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.569 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:15.569 "name": "raid_bdev1", 00:17:15.569 "aliases": [ 00:17:15.569 "2b2bc0d4-0d08-4946-b34d-487d1f177320" 00:17:15.569 ], 00:17:15.569 "product_name": "Raid Volume", 00:17:15.569 "block_size": 512, 00:17:15.569 "num_blocks": 190464, 00:17:15.569 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:15.569 "assigned_rate_limits": { 00:17:15.569 "rw_ios_per_sec": 0, 00:17:15.569 "rw_mbytes_per_sec": 0, 00:17:15.569 "r_mbytes_per_sec": 0, 00:17:15.569 "w_mbytes_per_sec": 0 00:17:15.569 }, 00:17:15.569 "claimed": false, 00:17:15.569 "zoned": false, 00:17:15.569 "supported_io_types": { 00:17:15.569 "read": true, 00:17:15.569 "write": true, 00:17:15.569 "unmap": true, 00:17:15.569 "write_zeroes": true, 00:17:15.569 "flush": true, 00:17:15.569 "reset": true, 00:17:15.569 "compare": false, 00:17:15.569 "compare_and_write": false, 00:17:15.569 "abort": false, 00:17:15.569 "nvme_admin": false, 00:17:15.569 "nvme_io": false 00:17:15.569 }, 00:17:15.569 "memory_domains": [ 00:17:15.569 { 00:17:15.569 "dma_device_id": "system", 00:17:15.569 "dma_device_type": 1 00:17:15.569 }, 00:17:15.569 { 00:17:15.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.569 "dma_device_type": 2 00:17:15.569 }, 00:17:15.569 { 00:17:15.569 "dma_device_id": "system", 00:17:15.569 "dma_device_type": 1 00:17:15.569 }, 00:17:15.569 { 00:17:15.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.569 "dma_device_type": 2 00:17:15.569 }, 00:17:15.569 { 00:17:15.569 "dma_device_id": "system", 00:17:15.569 "dma_device_type": 1 00:17:15.569 }, 
00:17:15.569 { 00:17:15.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.569 "dma_device_type": 2 00:17:15.569 } 00:17:15.569 ], 00:17:15.569 "driver_specific": { 00:17:15.569 "raid": { 00:17:15.569 "uuid": "2b2bc0d4-0d08-4946-b34d-487d1f177320", 00:17:15.569 "strip_size_kb": 64, 00:17:15.569 "state": "online", 00:17:15.569 "raid_level": "concat", 00:17:15.569 "superblock": true, 00:17:15.569 "num_base_bdevs": 3, 00:17:15.569 "num_base_bdevs_discovered": 3, 00:17:15.569 "num_base_bdevs_operational": 3, 00:17:15.569 "base_bdevs_list": [ 00:17:15.569 { 00:17:15.569 "name": "pt1", 00:17:15.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.569 "is_configured": true, 00:17:15.569 "data_offset": 2048, 00:17:15.569 "data_size": 63488 00:17:15.569 }, 00:17:15.569 { 00:17:15.569 "name": "pt2", 00:17:15.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.569 "is_configured": true, 00:17:15.569 "data_offset": 2048, 00:17:15.569 "data_size": 63488 00:17:15.569 }, 00:17:15.569 { 00:17:15.569 "name": "pt3", 00:17:15.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.569 "is_configured": true, 00:17:15.569 "data_offset": 2048, 00:17:15.569 "data_size": 63488 00:17:15.569 } 00:17:15.569 ] 00:17:15.569 } 00:17:15.569 } 00:17:15.569 }' 00:17:15.569 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.569 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:15.569 pt2 00:17:15.569 pt3' 00:17:15.569 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.569 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:15.569 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.828 
19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.828 "name": "pt1", 00:17:15.828 "aliases": [ 00:17:15.828 "00000000-0000-0000-0000-000000000001" 00:17:15.828 ], 00:17:15.828 "product_name": "passthru", 00:17:15.828 "block_size": 512, 00:17:15.828 "num_blocks": 65536, 00:17:15.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.828 "assigned_rate_limits": { 00:17:15.828 "rw_ios_per_sec": 0, 00:17:15.828 "rw_mbytes_per_sec": 0, 00:17:15.828 "r_mbytes_per_sec": 0, 00:17:15.828 "w_mbytes_per_sec": 0 00:17:15.828 }, 00:17:15.828 "claimed": true, 00:17:15.828 "claim_type": "exclusive_write", 00:17:15.828 "zoned": false, 00:17:15.828 "supported_io_types": { 00:17:15.828 "read": true, 00:17:15.828 "write": true, 00:17:15.828 "unmap": true, 00:17:15.828 "write_zeroes": true, 00:17:15.828 "flush": true, 00:17:15.828 "reset": true, 00:17:15.828 "compare": false, 00:17:15.828 "compare_and_write": false, 00:17:15.828 "abort": true, 00:17:15.828 "nvme_admin": false, 00:17:15.828 "nvme_io": false 00:17:15.828 }, 00:17:15.828 "memory_domains": [ 00:17:15.828 { 00:17:15.828 "dma_device_id": "system", 00:17:15.828 "dma_device_type": 1 00:17:15.828 }, 00:17:15.828 { 00:17:15.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.828 "dma_device_type": 2 00:17:15.828 } 00:17:15.828 ], 00:17:15.828 "driver_specific": { 00:17:15.828 "passthru": { 00:17:15.828 "name": "pt1", 00:17:15.828 "base_bdev_name": "malloc1" 00:17:15.828 } 00:17:15.828 } 00:17:15.828 }' 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.828 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:16.114 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.429 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.429 "name": "pt2", 00:17:16.429 "aliases": [ 00:17:16.429 "00000000-0000-0000-0000-000000000002" 00:17:16.429 ], 00:17:16.429 "product_name": "passthru", 00:17:16.429 "block_size": 512, 00:17:16.429 "num_blocks": 65536, 00:17:16.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.429 "assigned_rate_limits": { 00:17:16.429 "rw_ios_per_sec": 0, 00:17:16.429 "rw_mbytes_per_sec": 0, 00:17:16.429 "r_mbytes_per_sec": 0, 00:17:16.429 "w_mbytes_per_sec": 0 00:17:16.429 }, 00:17:16.429 "claimed": true, 00:17:16.429 "claim_type": "exclusive_write", 00:17:16.429 "zoned": false, 00:17:16.429 "supported_io_types": { 00:17:16.429 "read": true, 00:17:16.429 "write": true, 00:17:16.429 "unmap": true, 00:17:16.429 "write_zeroes": true, 00:17:16.429 "flush": true, 00:17:16.429 
"reset": true, 00:17:16.429 "compare": false, 00:17:16.429 "compare_and_write": false, 00:17:16.429 "abort": true, 00:17:16.429 "nvme_admin": false, 00:17:16.429 "nvme_io": false 00:17:16.429 }, 00:17:16.429 "memory_domains": [ 00:17:16.429 { 00:17:16.429 "dma_device_id": "system", 00:17:16.429 "dma_device_type": 1 00:17:16.429 }, 00:17:16.429 { 00:17:16.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.429 "dma_device_type": 2 00:17:16.429 } 00:17:16.429 ], 00:17:16.429 "driver_specific": { 00:17:16.429 "passthru": { 00:17:16.429 "name": "pt2", 00:17:16.429 "base_bdev_name": "malloc2" 00:17:16.429 } 00:17:16.429 } 00:17:16.429 }' 00:17:16.429 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.429 19:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.429 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:16.429 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.429 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.429 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:16.429 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.429 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:16.688 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.947 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.947 "name": "pt3", 00:17:16.947 "aliases": [ 00:17:16.947 "00000000-0000-0000-0000-000000000003" 00:17:16.947 ], 00:17:16.947 "product_name": "passthru", 00:17:16.947 "block_size": 512, 00:17:16.947 "num_blocks": 65536, 00:17:16.947 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.947 "assigned_rate_limits": { 00:17:16.947 "rw_ios_per_sec": 0, 00:17:16.948 "rw_mbytes_per_sec": 0, 00:17:16.948 "r_mbytes_per_sec": 0, 00:17:16.948 "w_mbytes_per_sec": 0 00:17:16.948 }, 00:17:16.948 "claimed": true, 00:17:16.948 "claim_type": "exclusive_write", 00:17:16.948 "zoned": false, 00:17:16.948 "supported_io_types": { 00:17:16.948 "read": true, 00:17:16.948 "write": true, 00:17:16.948 "unmap": true, 00:17:16.948 "write_zeroes": true, 00:17:16.948 "flush": true, 00:17:16.948 "reset": true, 00:17:16.948 "compare": false, 00:17:16.948 "compare_and_write": false, 00:17:16.948 "abort": true, 00:17:16.948 "nvme_admin": false, 00:17:16.948 "nvme_io": false 00:17:16.948 }, 00:17:16.948 "memory_domains": [ 00:17:16.948 { 00:17:16.948 "dma_device_id": "system", 00:17:16.948 "dma_device_type": 1 00:17:16.948 }, 00:17:16.948 { 00:17:16.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.948 "dma_device_type": 2 00:17:16.948 } 00:17:16.948 ], 00:17:16.948 "driver_specific": { 00:17:16.948 "passthru": { 00:17:16.948 "name": "pt3", 00:17:16.948 "base_bdev_name": "malloc3" 00:17:16.948 } 00:17:16.948 } 00:17:16.948 }' 00:17:16.948 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.948 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.948 19:01:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:16.948 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.948 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.948 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:16.948 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:17.207 19:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:17.466 [2024-06-10 19:01:32.060968] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2b2bc0d4-0d08-4946-b34d-487d1f177320 '!=' 2b2bc0d4-0d08-4946-b34d-487d1f177320 ']' 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1668976 00:17:17.466 19:01:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1668976 ']' 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1668976 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1668976 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1668976' 00:17:17.466 killing process with pid 1668976 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1668976 00:17:17.466 [2024-06-10 19:01:32.137742] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.466 [2024-06-10 19:01:32.137795] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.466 [2024-06-10 19:01:32.137841] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.466 [2024-06-10 19:01:32.137852] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1646e90 name raid_bdev1, state offline 00:17:17.466 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1668976 00:17:17.466 [2024-06-10 19:01:32.161124] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.726 19:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:17.726 00:17:17.726 real 0m13.365s 00:17:17.726 user 0m24.031s 00:17:17.726 sys 0m2.448s 00:17:17.726 19:01:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:17.726 19:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.726 ************************************ 00:17:17.726 END TEST raid_superblock_test 00:17:17.726 ************************************ 00:17:17.726 19:01:32 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:17:17.726 19:01:32 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:17.726 19:01:32 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:17.726 19:01:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.726 ************************************ 00:17:17.726 START TEST raid_read_error_test 00:17:17.726 ************************************ 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 3 read 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:17.726 
19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.j1V2mm54zX 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1671666 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1671666 /var/tmp/spdk-raid.sock 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1671666 ']' 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:17.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:17.726 19:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.985 [2024-06-10 19:01:32.508730] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:17:17.985 [2024-06-10 19:01:32.508786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671666 ] 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.985 EAL: Requested device 0000:b6:01.0 cannot be used 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.985 EAL: Requested device 0000:b6:01.1 cannot be used 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.985 EAL: Requested device 0000:b6:01.2 cannot be used 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.985 EAL: Requested device 0000:b6:01.3 cannot be used 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.985 EAL: Requested device 0000:b6:01.4 cannot be used 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.985 EAL: Requested device 0000:b6:01.5 cannot be used 00:17:17.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:01.6 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:01.7 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.0 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.1 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.2 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.3 cannot be used 00:17:17.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.4 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.5 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.6 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b6:02.7 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.0 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.1 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.2 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.3 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.4 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.5 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.6 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:01.7 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.0 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.1 cannot be used 00:17:17.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.2 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.3 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.4 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.5 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.6 cannot be used 00:17:17.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.986 EAL: Requested device 0000:b8:02.7 cannot be used 00:17:17.986 [2024-06-10 19:01:32.641788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.986 [2024-06-10 19:01:32.727836] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.245 [2024-06-10 19:01:32.782671] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.245 [2024-06-10 19:01:32.782699] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.813 19:01:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:18.813 19:01:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:17:18.813 19:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:18.813 19:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:19.072 BaseBdev1_malloc 00:17:19.072 19:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:19.331 true 00:17:19.331 19:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:19.331 [2024-06-10 19:01:34.066379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:19.331 [2024-06-10 19:01:34.066418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.331 [2024-06-10 19:01:34.066437] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15b2d50 00:17:19.331 [2024-06-10 19:01:34.066448] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.331 [2024-06-10 19:01:34.068011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.331 [2024-06-10 19:01:34.068038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:19.331 BaseBdev1 00:17:19.331 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:19.331 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:19.591 BaseBdev2_malloc 00:17:19.591 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:19.851 true 00:17:19.851 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:20.111 [2024-06-10 19:01:34.728373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:20.111 [2024-06-10 19:01:34.728410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.111 [2024-06-10 19:01:34.728429] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15b82e0 00:17:20.111 [2024-06-10 19:01:34.728440] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.111 [2024-06-10 19:01:34.729813] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.111 [2024-06-10 19:01:34.729839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:20.111 BaseBdev2 00:17:20.111 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:20.111 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:20.371 BaseBdev3_malloc 00:17:20.371 19:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:20.631 true 00:17:20.631 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:20.891 [2024-06-10 19:01:35.398491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:20.891 [2024-06-10 19:01:35.398529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.891 [2024-06-10 19:01:35.398547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15b9fd0 00:17:20.891 [2024-06-10 19:01:35.398559] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.891 [2024-06-10 
19:01:35.399924] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.891 [2024-06-10 19:01:35.399950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:20.891 BaseBdev3 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:20.891 [2024-06-10 19:01:35.623233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.891 [2024-06-10 19:01:35.624401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.891 [2024-06-10 19:01:35.624465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.891 [2024-06-10 19:01:35.624661] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15bb3f0 00:17:20.891 [2024-06-10 19:01:35.624672] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:20.891 [2024-06-10 19:01:35.624848] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x140e820 00:17:20.891 [2024-06-10 19:01:35.624985] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15bb3f0 00:17:20.891 [2024-06-10 19:01:35.624994] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15bb3f0 00:17:20.891 [2024-06-10 19:01:35.625088] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.891 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.150 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.150 "name": "raid_bdev1", 00:17:21.150 "uuid": "074cd6e6-b704-4050-8dc8-6df92df1ec52", 00:17:21.150 "strip_size_kb": 64, 00:17:21.151 "state": "online", 00:17:21.151 "raid_level": "concat", 00:17:21.151 "superblock": true, 00:17:21.151 "num_base_bdevs": 3, 00:17:21.151 "num_base_bdevs_discovered": 3, 00:17:21.151 "num_base_bdevs_operational": 3, 00:17:21.151 "base_bdevs_list": [ 00:17:21.151 { 00:17:21.151 "name": "BaseBdev1", 00:17:21.151 "uuid": "2fbcc043-4701-5c8f-ae30-b4d37c4fd5f6", 00:17:21.151 "is_configured": true, 00:17:21.151 "data_offset": 2048, 00:17:21.151 "data_size": 63488 00:17:21.151 }, 00:17:21.151 { 00:17:21.151 "name": "BaseBdev2", 00:17:21.151 "uuid": "66b55170-8605-55f3-87e9-0e2a0c3bcb49", 00:17:21.151 "is_configured": true, 00:17:21.151 "data_offset": 2048, 00:17:21.151 "data_size": 63488 
00:17:21.151 }, 00:17:21.151 { 00:17:21.151 "name": "BaseBdev3", 00:17:21.151 "uuid": "f4466c08-548a-53a9-b0c1-372bc32c1d83", 00:17:21.151 "is_configured": true, 00:17:21.151 "data_offset": 2048, 00:17:21.151 "data_size": 63488 00:17:21.151 } 00:17:21.151 ] 00:17:21.151 }' 00:17:21.151 19:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.151 19:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.719 19:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:21.719 19:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:21.978 [2024-06-10 19:01:36.509795] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x110bd40 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.916 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.176 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.176 "name": "raid_bdev1", 00:17:23.176 "uuid": "074cd6e6-b704-4050-8dc8-6df92df1ec52", 00:17:23.176 "strip_size_kb": 64, 00:17:23.176 "state": "online", 00:17:23.176 "raid_level": "concat", 00:17:23.176 "superblock": true, 00:17:23.176 "num_base_bdevs": 3, 00:17:23.176 "num_base_bdevs_discovered": 3, 00:17:23.176 "num_base_bdevs_operational": 3, 00:17:23.176 "base_bdevs_list": [ 00:17:23.176 { 00:17:23.176 "name": "BaseBdev1", 00:17:23.176 "uuid": "2fbcc043-4701-5c8f-ae30-b4d37c4fd5f6", 00:17:23.176 "is_configured": true, 00:17:23.176 "data_offset": 2048, 00:17:23.176 "data_size": 63488 00:17:23.176 }, 00:17:23.176 { 00:17:23.176 "name": "BaseBdev2", 00:17:23.176 "uuid": "66b55170-8605-55f3-87e9-0e2a0c3bcb49", 00:17:23.176 "is_configured": true, 00:17:23.176 "data_offset": 2048, 00:17:23.176 "data_size": 63488 00:17:23.176 }, 00:17:23.176 { 00:17:23.176 "name": "BaseBdev3", 00:17:23.176 "uuid": "f4466c08-548a-53a9-b0c1-372bc32c1d83", 00:17:23.176 "is_configured": true, 00:17:23.176 "data_offset": 
2048, 00:17:23.176 "data_size": 63488 00:17:23.176 } 00:17:23.176 ] 00:17:23.176 }' 00:17:23.176 19:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.176 19:01:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.745 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:24.005 [2024-06-10 19:01:38.660403] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.005 [2024-06-10 19:01:38.660437] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.005 [2024-06-10 19:01:38.663339] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.005 [2024-06-10 19:01:38.663374] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.005 [2024-06-10 19:01:38.663404] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.005 [2024-06-10 19:01:38.663413] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15bb3f0 name raid_bdev1, state offline 00:17:24.005 0 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1671666 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1671666 ']' 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1671666 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1671666 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1671666' 00:17:24.005 killing process with pid 1671666 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1671666 00:17:24.005 [2024-06-10 19:01:38.739407] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.005 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1671666 00:17:24.005 [2024-06-10 19:01:38.757743] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.j1V2mm54zX 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:17:24.265 00:17:24.265 real 0m6.528s 00:17:24.265 user 0m10.307s 00:17:24.265 sys 0m1.122s 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:24.265 19:01:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.265 ************************************ 00:17:24.265 END TEST raid_read_error_test 00:17:24.265 
************************************ 00:17:24.265 19:01:39 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:17:24.265 19:01:39 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:24.265 19:01:39 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:24.265 19:01:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.525 ************************************ 00:17:24.525 START TEST raid_write_error_test 00:17:24.525 ************************************ 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 3 write 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.y9Fzxoq5Dq 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1672829 00:17:24.525 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1672829 /var/tmp/spdk-raid.sock 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@830 -- # '[' -z 1672829 ']' 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:24.526 19:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.526 [2024-06-10 19:01:39.115584] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:17:24.526 [2024-06-10 19:01:39.115630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672829 ] 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.0 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.1 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.2 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.3 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.4 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: 
Requested device 0000:b6:01.5 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.6 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:01.7 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.0 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.1 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.2 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.3 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.4 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.5 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.6 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b6:02.7 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.0 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.1 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.2 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 
0000:b8:01.3 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.4 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.5 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.6 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:01.7 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.0 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.1 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.2 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.3 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.4 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.5 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.6 cannot be used 00:17:24.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:24.526 EAL: Requested device 0000:b8:02.7 cannot be used 00:17:24.526 [2024-06-10 19:01:39.235934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.785 [2024-06-10 19:01:39.321551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.785 [2024-06-10 19:01:39.382212] 
bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.785 [2024-06-10 19:01:39.382250] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.352 19:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:25.352 19:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:17:25.352 19:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:25.352 19:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.610 BaseBdev1_malloc 00:17:25.610 19:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:25.869 true 00:17:25.869 19:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:26.128 [2024-06-10 19:01:40.684478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:26.128 [2024-06-10 19:01:40.684520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.128 [2024-06-10 19:01:40.684538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af8d50 00:17:26.128 [2024-06-10 19:01:40.684551] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.128 [2024-06-10 19:01:40.686089] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.128 [2024-06-10 19:01:40.686117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.128 BaseBdev1 00:17:26.128 19:01:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:26.128 19:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:26.386 BaseBdev2_malloc 00:17:26.386 19:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:26.386 true 00:17:26.645 19:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:26.645 [2024-06-10 19:01:41.366528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:26.645 [2024-06-10 19:01:41.366566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.645 [2024-06-10 19:01:41.366589] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afe2e0 00:17:26.645 [2024-06-10 19:01:41.366601] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.645 [2024-06-10 19:01:41.367856] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.645 [2024-06-10 19:01:41.367881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.645 BaseBdev2 00:17:26.645 19:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:26.645 19:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:26.905 BaseBdev3_malloc 00:17:26.905 19:01:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:27.164 true 00:17:27.164 19:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:27.422 [2024-06-10 19:01:42.056621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:27.422 [2024-06-10 19:01:42.056658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.422 [2024-06-10 19:01:42.056675] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afffd0 00:17:27.422 [2024-06-10 19:01:42.056687] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.422 [2024-06-10 19:01:42.057940] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.422 [2024-06-10 19:01:42.057965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:27.423 BaseBdev3 00:17:27.423 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:27.681 [2024-06-10 19:01:42.285243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.681 [2024-06-10 19:01:42.286315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.681 [2024-06-10 19:01:42.286377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.681 [2024-06-10 19:01:42.286563] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b013f0 00:17:27.681 [2024-06-10 19:01:42.286574] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:27.681 [2024-06-10 19:01:42.286737] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1954820 00:17:27.681 [2024-06-10 19:01:42.286870] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b013f0 00:17:27.681 [2024-06-10 19:01:42.286879] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b013f0 00:17:27.681 [2024-06-10 19:01:42.286966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.681 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.940 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.940 "name": "raid_bdev1", 00:17:27.940 "uuid": "980d9946-6ae1-43d5-98d8-101b4ce7503a", 00:17:27.940 "strip_size_kb": 64, 00:17:27.940 "state": "online", 00:17:27.940 "raid_level": "concat", 00:17:27.940 "superblock": true, 00:17:27.940 "num_base_bdevs": 3, 00:17:27.940 "num_base_bdevs_discovered": 3, 00:17:27.940 "num_base_bdevs_operational": 3, 00:17:27.940 "base_bdevs_list": [ 00:17:27.940 { 00:17:27.940 "name": "BaseBdev1", 00:17:27.940 "uuid": "35358dc8-9e5e-5208-9bc7-9a8bd16fb060", 00:17:27.940 "is_configured": true, 00:17:27.940 "data_offset": 2048, 00:17:27.940 "data_size": 63488 00:17:27.940 }, 00:17:27.940 { 00:17:27.940 "name": "BaseBdev2", 00:17:27.940 "uuid": "ef74f165-21ae-565f-a9d6-0ec7423411d2", 00:17:27.940 "is_configured": true, 00:17:27.940 "data_offset": 2048, 00:17:27.940 "data_size": 63488 00:17:27.940 }, 00:17:27.940 { 00:17:27.940 "name": "BaseBdev3", 00:17:27.940 "uuid": "59e2476c-f0db-5ae3-a38f-3b84df9a7dcc", 00:17:27.940 "is_configured": true, 00:17:27.940 "data_offset": 2048, 00:17:27.940 "data_size": 63488 00:17:27.940 } 00:17:27.940 ] 00:17:27.940 }' 00:17:27.940 19:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.940 19:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.509 19:01:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:28.509 19:01:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:28.509 [2024-06-10 19:01:43.203908] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1651d40 00:17:29.447 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.706 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.965 19:01:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.965 "name": "raid_bdev1", 00:17:29.965 "uuid": "980d9946-6ae1-43d5-98d8-101b4ce7503a", 00:17:29.965 "strip_size_kb": 64, 00:17:29.965 "state": "online", 00:17:29.965 "raid_level": "concat", 00:17:29.965 "superblock": true, 00:17:29.965 "num_base_bdevs": 3, 00:17:29.965 "num_base_bdevs_discovered": 3, 00:17:29.965 "num_base_bdevs_operational": 3, 00:17:29.965 "base_bdevs_list": [ 00:17:29.965 { 00:17:29.965 "name": "BaseBdev1", 00:17:29.965 "uuid": "35358dc8-9e5e-5208-9bc7-9a8bd16fb060", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 }, 00:17:29.965 { 00:17:29.965 "name": "BaseBdev2", 00:17:29.965 "uuid": "ef74f165-21ae-565f-a9d6-0ec7423411d2", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 }, 00:17:29.965 { 00:17:29.965 "name": "BaseBdev3", 00:17:29.965 "uuid": "59e2476c-f0db-5ae3-a38f-3b84df9a7dcc", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 } 00:17:29.965 ] 00:17:29.965 }' 00:17:29.965 19:01:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.965 19:01:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.533 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:30.793 [2024-06-10 19:01:45.302510] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.793 [2024-06-10 19:01:45.302544] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.793 [2024-06-10 19:01:45.305456] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.793 [2024-06-10 19:01:45.305491] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:30.793 [2024-06-10 19:01:45.305522] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.793 [2024-06-10 19:01:45.305532] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b013f0 name raid_bdev1, state offline 00:17:30.793 0 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1672829 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1672829 ']' 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1672829 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1672829 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1672829' 00:17:30.793 killing process with pid 1672829 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1672829 00:17:30.793 [2024-06-10 19:01:45.381434] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.793 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1672829 00:17:30.793 [2024-06-10 19:01:45.400096] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.y9Fzxoq5Dq 00:17:31.052 19:01:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:17:31.052 00:17:31.052 real 0m6.565s 00:17:31.052 user 0m10.329s 00:17:31.052 sys 0m1.166s 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:31.052 19:01:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.052 ************************************ 00:17:31.052 END TEST raid_write_error_test 00:17:31.052 ************************************ 00:17:31.052 19:01:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:31.052 19:01:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:31.052 19:01:45 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:31.052 19:01:45 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:31.052 19:01:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.052 ************************************ 00:17:31.052 START TEST raid_state_function_test 00:17:31.052 ************************************ 00:17:31.052 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 3 false 00:17:31.052 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 
00:17:31.052 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1673993 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1673993' 00:17:31.053 Process raid pid: 1673993 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1673993 /var/tmp/spdk-raid.sock 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1673993 ']' 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:31.053 19:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.053 [2024-06-10 19:01:45.768363] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:17:31.053 [2024-06-10 19:01:45.768424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.0 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.1 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.2 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.3 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.4 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.5 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.6 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:01.7 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.0 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.1 cannot be used 
00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.2 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.3 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.4 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.5 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.6 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b6:02.7 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.0 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.1 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.2 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.3 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.4 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.5 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.6 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:01.7 cannot be used 00:17:31.312 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.0 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.1 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.2 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.3 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.4 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.5 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.6 cannot be used 00:17:31.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.312 EAL: Requested device 0000:b8:02.7 cannot be used 00:17:31.312 [2024-06-10 19:01:45.902911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.312 [2024-06-10 19:01:45.989152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.312 [2024-06-10 19:01:46.049987] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.312 [2024-06-10 19:01:46.050023] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:17:32.249 [2024-06-10 19:01:46.876724] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.249 [2024-06-10 19:01:46.876764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.249 [2024-06-10 19:01:46.876774] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.249 [2024-06-10 19:01:46.876785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.249 [2024-06-10 19:01:46.876794] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.249 [2024-06-10 19:01:46.876804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.249 19:01:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.249 19:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.509 19:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.509 "name": "Existed_Raid", 00:17:32.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.509 "strip_size_kb": 0, 00:17:32.509 "state": "configuring", 00:17:32.509 "raid_level": "raid1", 00:17:32.509 "superblock": false, 00:17:32.509 "num_base_bdevs": 3, 00:17:32.509 "num_base_bdevs_discovered": 0, 00:17:32.509 "num_base_bdevs_operational": 3, 00:17:32.509 "base_bdevs_list": [ 00:17:32.509 { 00:17:32.509 "name": "BaseBdev1", 00:17:32.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.509 "is_configured": false, 00:17:32.509 "data_offset": 0, 00:17:32.509 "data_size": 0 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "BaseBdev2", 00:17:32.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.509 "is_configured": false, 00:17:32.509 "data_offset": 0, 00:17:32.509 "data_size": 0 00:17:32.509 }, 00:17:32.509 { 00:17:32.509 "name": "BaseBdev3", 00:17:32.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.509 "is_configured": false, 00:17:32.509 "data_offset": 0, 00:17:32.509 "data_size": 0 00:17:32.509 } 00:17:32.509 ] 00:17:32.509 }' 00:17:32.509 19:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.509 19:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.077 19:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.336 [2024-06-10 19:01:47.895305] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:17:33.336 [2024-06-10 19:01:47.895335] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa37f30 name Existed_Raid, state configuring 00:17:33.336 19:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:33.595 [2024-06-10 19:01:48.119894] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.595 [2024-06-10 19:01:48.119926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.595 [2024-06-10 19:01:48.119935] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.595 [2024-06-10 19:01:48.119946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.595 [2024-06-10 19:01:48.119955] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.595 [2024-06-10 19:01:48.119965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.595 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.854 [2024-06-10 19:01:48.353883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.854 BaseBdev1 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@900 -- # local i 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.854 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.113 [ 00:17:34.113 { 00:17:34.113 "name": "BaseBdev1", 00:17:34.113 "aliases": [ 00:17:34.113 "d68f6c43-383c-4131-bde9-27d305172aaa" 00:17:34.113 ], 00:17:34.113 "product_name": "Malloc disk", 00:17:34.113 "block_size": 512, 00:17:34.113 "num_blocks": 65536, 00:17:34.113 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:34.113 "assigned_rate_limits": { 00:17:34.113 "rw_ios_per_sec": 0, 00:17:34.113 "rw_mbytes_per_sec": 0, 00:17:34.113 "r_mbytes_per_sec": 0, 00:17:34.113 "w_mbytes_per_sec": 0 00:17:34.113 }, 00:17:34.113 "claimed": true, 00:17:34.113 "claim_type": "exclusive_write", 00:17:34.113 "zoned": false, 00:17:34.113 "supported_io_types": { 00:17:34.113 "read": true, 00:17:34.113 "write": true, 00:17:34.113 "unmap": true, 00:17:34.113 "write_zeroes": true, 00:17:34.113 "flush": true, 00:17:34.113 "reset": true, 00:17:34.113 "compare": false, 00:17:34.113 "compare_and_write": false, 00:17:34.113 "abort": true, 00:17:34.113 "nvme_admin": false, 00:17:34.113 "nvme_io": false 00:17:34.113 }, 00:17:34.113 "memory_domains": [ 00:17:34.113 { 00:17:34.113 "dma_device_id": "system", 00:17:34.113 "dma_device_type": 1 00:17:34.113 }, 00:17:34.113 { 00:17:34.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.113 "dma_device_type": 2 00:17:34.113 } 00:17:34.113 ], 
00:17:34.113 "driver_specific": {} 00:17:34.113 } 00:17:34.113 ] 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.113 19:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.372 19:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.372 "name": "Existed_Raid", 00:17:34.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.372 "strip_size_kb": 0, 00:17:34.372 "state": "configuring", 00:17:34.372 "raid_level": "raid1", 00:17:34.372 "superblock": 
false, 00:17:34.372 "num_base_bdevs": 3, 00:17:34.372 "num_base_bdevs_discovered": 1, 00:17:34.372 "num_base_bdevs_operational": 3, 00:17:34.372 "base_bdevs_list": [ 00:17:34.372 { 00:17:34.372 "name": "BaseBdev1", 00:17:34.372 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:34.372 "is_configured": true, 00:17:34.372 "data_offset": 0, 00:17:34.372 "data_size": 65536 00:17:34.372 }, 00:17:34.372 { 00:17:34.372 "name": "BaseBdev2", 00:17:34.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.372 "is_configured": false, 00:17:34.372 "data_offset": 0, 00:17:34.372 "data_size": 0 00:17:34.372 }, 00:17:34.372 { 00:17:34.372 "name": "BaseBdev3", 00:17:34.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.372 "is_configured": false, 00:17:34.372 "data_offset": 0, 00:17:34.372 "data_size": 0 00:17:34.372 } 00:17:34.372 ] 00:17:34.372 }' 00:17:34.372 19:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.372 19:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.940 19:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:35.199 [2024-06-10 19:01:49.829762] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.199 [2024-06-10 19:01:49.829794] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa37800 name Existed_Raid, state configuring 00:17:35.199 19:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:35.497 [2024-06-10 19:01:50.054382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.497 [2024-06-10 19:01:50.055774] bdev.c:8114:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.497 [2024-06-10 19:01:50.055804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.497 [2024-06-10 19:01:50.055815] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.497 [2024-06-10 19:01:50.055826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:35.497 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.778 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.778 "name": "Existed_Raid", 00:17:35.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.778 "strip_size_kb": 0, 00:17:35.778 "state": "configuring", 00:17:35.778 "raid_level": "raid1", 00:17:35.778 "superblock": false, 00:17:35.778 "num_base_bdevs": 3, 00:17:35.778 "num_base_bdevs_discovered": 1, 00:17:35.778 "num_base_bdevs_operational": 3, 00:17:35.778 "base_bdevs_list": [ 00:17:35.778 { 00:17:35.778 "name": "BaseBdev1", 00:17:35.778 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:35.778 "is_configured": true, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 65536 00:17:35.778 }, 00:17:35.778 { 00:17:35.778 "name": "BaseBdev2", 00:17:35.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.778 "is_configured": false, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 0 00:17:35.778 }, 00:17:35.778 { 00:17:35.778 "name": "BaseBdev3", 00:17:35.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.778 "is_configured": false, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 0 00:17:35.778 } 00:17:35.778 ] 00:17:35.778 }' 00:17:35.778 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.778 19:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.345 19:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.346 [2024-06-10 19:01:51.056267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.346 BaseBdev2 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:36.346 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.604 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.862 [ 00:17:36.862 { 00:17:36.862 "name": "BaseBdev2", 00:17:36.862 "aliases": [ 00:17:36.862 "e888824a-7371-4f06-8efe-30aa97fcbd19" 00:17:36.862 ], 00:17:36.862 "product_name": "Malloc disk", 00:17:36.862 "block_size": 512, 00:17:36.862 "num_blocks": 65536, 00:17:36.862 "uuid": "e888824a-7371-4f06-8efe-30aa97fcbd19", 00:17:36.862 "assigned_rate_limits": { 00:17:36.862 "rw_ios_per_sec": 0, 00:17:36.862 "rw_mbytes_per_sec": 0, 00:17:36.862 "r_mbytes_per_sec": 0, 00:17:36.862 "w_mbytes_per_sec": 0 00:17:36.862 }, 00:17:36.862 "claimed": true, 00:17:36.862 "claim_type": "exclusive_write", 00:17:36.862 "zoned": false, 00:17:36.862 "supported_io_types": { 00:17:36.862 "read": true, 00:17:36.862 "write": true, 00:17:36.862 "unmap": true, 00:17:36.862 "write_zeroes": true, 00:17:36.862 "flush": true, 00:17:36.862 "reset": true, 00:17:36.862 "compare": false, 00:17:36.862 "compare_and_write": false, 00:17:36.862 "abort": true, 00:17:36.862 
"nvme_admin": false, 00:17:36.862 "nvme_io": false 00:17:36.862 }, 00:17:36.862 "memory_domains": [ 00:17:36.862 { 00:17:36.862 "dma_device_id": "system", 00:17:36.862 "dma_device_type": 1 00:17:36.862 }, 00:17:36.862 { 00:17:36.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.862 "dma_device_type": 2 00:17:36.862 } 00:17:36.862 ], 00:17:36.862 "driver_specific": {} 00:17:36.862 } 00:17:36.862 ] 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.862 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.121 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.121 "name": "Existed_Raid", 00:17:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.121 "strip_size_kb": 0, 00:17:37.121 "state": "configuring", 00:17:37.121 "raid_level": "raid1", 00:17:37.121 "superblock": false, 00:17:37.121 "num_base_bdevs": 3, 00:17:37.121 "num_base_bdevs_discovered": 2, 00:17:37.121 "num_base_bdevs_operational": 3, 00:17:37.121 "base_bdevs_list": [ 00:17:37.121 { 00:17:37.121 "name": "BaseBdev1", 00:17:37.121 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:37.121 "is_configured": true, 00:17:37.121 "data_offset": 0, 00:17:37.121 "data_size": 65536 00:17:37.121 }, 00:17:37.121 { 00:17:37.121 "name": "BaseBdev2", 00:17:37.121 "uuid": "e888824a-7371-4f06-8efe-30aa97fcbd19", 00:17:37.121 "is_configured": true, 00:17:37.121 "data_offset": 0, 00:17:37.121 "data_size": 65536 00:17:37.121 }, 00:17:37.121 { 00:17:37.121 "name": "BaseBdev3", 00:17:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.121 "is_configured": false, 00:17:37.121 "data_offset": 0, 00:17:37.121 "data_size": 0 00:17:37.121 } 00:17:37.121 ] 00:17:37.121 }' 00:17:37.121 19:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.121 19:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.688 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:37.947 [2024-06-10 19:01:52.511276] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:17:37.947 [2024-06-10 19:01:52.511308] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa386f0 00:17:37.947 [2024-06-10 19:01:52.511316] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:37.947 [2024-06-10 19:01:52.511495] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa383c0 00:17:37.947 [2024-06-10 19:01:52.511621] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa386f0 00:17:37.947 [2024-06-10 19:01:52.511631] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xa386f0 00:17:37.947 [2024-06-10 19:01:52.511780] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.947 BaseBdev3 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:37.947 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:38.205 [ 00:17:38.205 { 00:17:38.205 "name": "BaseBdev3", 00:17:38.205 "aliases": [ 00:17:38.205 
"5b505467-7d58-4746-bbec-7cdcbb8fbb7e" 00:17:38.205 ], 00:17:38.205 "product_name": "Malloc disk", 00:17:38.205 "block_size": 512, 00:17:38.205 "num_blocks": 65536, 00:17:38.205 "uuid": "5b505467-7d58-4746-bbec-7cdcbb8fbb7e", 00:17:38.205 "assigned_rate_limits": { 00:17:38.205 "rw_ios_per_sec": 0, 00:17:38.205 "rw_mbytes_per_sec": 0, 00:17:38.205 "r_mbytes_per_sec": 0, 00:17:38.205 "w_mbytes_per_sec": 0 00:17:38.205 }, 00:17:38.205 "claimed": true, 00:17:38.205 "claim_type": "exclusive_write", 00:17:38.205 "zoned": false, 00:17:38.205 "supported_io_types": { 00:17:38.205 "read": true, 00:17:38.205 "write": true, 00:17:38.205 "unmap": true, 00:17:38.205 "write_zeroes": true, 00:17:38.205 "flush": true, 00:17:38.205 "reset": true, 00:17:38.205 "compare": false, 00:17:38.205 "compare_and_write": false, 00:17:38.205 "abort": true, 00:17:38.205 "nvme_admin": false, 00:17:38.205 "nvme_io": false 00:17:38.205 }, 00:17:38.205 "memory_domains": [ 00:17:38.205 { 00:17:38.205 "dma_device_id": "system", 00:17:38.205 "dma_device_type": 1 00:17:38.205 }, 00:17:38.205 { 00:17:38.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.205 "dma_device_type": 2 00:17:38.205 } 00:17:38.205 ], 00:17:38.205 "driver_specific": {} 00:17:38.205 } 00:17:38.205 ] 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:38.205 19:01:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.205 19:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.464 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.464 "name": "Existed_Raid", 00:17:38.464 "uuid": "f24b773f-4437-48d0-8286-6416057de37e", 00:17:38.464 "strip_size_kb": 0, 00:17:38.464 "state": "online", 00:17:38.464 "raid_level": "raid1", 00:17:38.464 "superblock": false, 00:17:38.464 "num_base_bdevs": 3, 00:17:38.464 "num_base_bdevs_discovered": 3, 00:17:38.464 "num_base_bdevs_operational": 3, 00:17:38.464 "base_bdevs_list": [ 00:17:38.464 { 00:17:38.464 "name": "BaseBdev1", 00:17:38.464 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:38.464 "is_configured": true, 00:17:38.464 "data_offset": 0, 00:17:38.464 "data_size": 65536 00:17:38.464 }, 00:17:38.464 { 00:17:38.464 "name": "BaseBdev2", 00:17:38.464 "uuid": "e888824a-7371-4f06-8efe-30aa97fcbd19", 00:17:38.464 "is_configured": true, 00:17:38.464 "data_offset": 0, 00:17:38.464 
"data_size": 65536 00:17:38.464 }, 00:17:38.464 { 00:17:38.464 "name": "BaseBdev3", 00:17:38.464 "uuid": "5b505467-7d58-4746-bbec-7cdcbb8fbb7e", 00:17:38.464 "is_configured": true, 00:17:38.464 "data_offset": 0, 00:17:38.464 "data_size": 65536 00:17:38.464 } 00:17:38.464 ] 00:17:38.464 }' 00:17:38.464 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.464 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:39.030 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:39.289 [2024-06-10 19:01:53.847045] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.289 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:39.289 "name": "Existed_Raid", 00:17:39.289 "aliases": [ 00:17:39.289 "f24b773f-4437-48d0-8286-6416057de37e" 00:17:39.289 ], 00:17:39.289 "product_name": "Raid Volume", 00:17:39.289 "block_size": 512, 00:17:39.289 "num_blocks": 65536, 00:17:39.289 "uuid": "f24b773f-4437-48d0-8286-6416057de37e", 
00:17:39.289 "assigned_rate_limits": { 00:17:39.289 "rw_ios_per_sec": 0, 00:17:39.289 "rw_mbytes_per_sec": 0, 00:17:39.289 "r_mbytes_per_sec": 0, 00:17:39.289 "w_mbytes_per_sec": 0 00:17:39.289 }, 00:17:39.289 "claimed": false, 00:17:39.289 "zoned": false, 00:17:39.289 "supported_io_types": { 00:17:39.289 "read": true, 00:17:39.289 "write": true, 00:17:39.289 "unmap": false, 00:17:39.289 "write_zeroes": true, 00:17:39.289 "flush": false, 00:17:39.289 "reset": true, 00:17:39.289 "compare": false, 00:17:39.289 "compare_and_write": false, 00:17:39.289 "abort": false, 00:17:39.289 "nvme_admin": false, 00:17:39.289 "nvme_io": false 00:17:39.289 }, 00:17:39.289 "memory_domains": [ 00:17:39.289 { 00:17:39.289 "dma_device_id": "system", 00:17:39.289 "dma_device_type": 1 00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.289 "dma_device_type": 2 00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "dma_device_id": "system", 00:17:39.289 "dma_device_type": 1 00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.289 "dma_device_type": 2 00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "dma_device_id": "system", 00:17:39.289 "dma_device_type": 1 00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.289 "dma_device_type": 2 00:17:39.289 } 00:17:39.289 ], 00:17:39.289 "driver_specific": { 00:17:39.289 "raid": { 00:17:39.289 "uuid": "f24b773f-4437-48d0-8286-6416057de37e", 00:17:39.289 "strip_size_kb": 0, 00:17:39.289 "state": "online", 00:17:39.289 "raid_level": "raid1", 00:17:39.289 "superblock": false, 00:17:39.289 "num_base_bdevs": 3, 00:17:39.289 "num_base_bdevs_discovered": 3, 00:17:39.289 "num_base_bdevs_operational": 3, 00:17:39.289 "base_bdevs_list": [ 00:17:39.289 { 00:17:39.289 "name": "BaseBdev1", 00:17:39.289 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:39.289 "is_configured": true, 00:17:39.289 "data_offset": 0, 00:17:39.289 "data_size": 65536 
00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "name": "BaseBdev2", 00:17:39.289 "uuid": "e888824a-7371-4f06-8efe-30aa97fcbd19", 00:17:39.289 "is_configured": true, 00:17:39.289 "data_offset": 0, 00:17:39.289 "data_size": 65536 00:17:39.289 }, 00:17:39.289 { 00:17:39.289 "name": "BaseBdev3", 00:17:39.289 "uuid": "5b505467-7d58-4746-bbec-7cdcbb8fbb7e", 00:17:39.289 "is_configured": true, 00:17:39.289 "data_offset": 0, 00:17:39.289 "data_size": 65536 00:17:39.289 } 00:17:39.289 ] 00:17:39.289 } 00:17:39.289 } 00:17:39.289 }' 00:17:39.289 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.289 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:39.289 BaseBdev2 00:17:39.289 BaseBdev3' 00:17:39.289 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.289 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:39.289 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.548 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.548 "name": "BaseBdev1", 00:17:39.548 "aliases": [ 00:17:39.548 "d68f6c43-383c-4131-bde9-27d305172aaa" 00:17:39.548 ], 00:17:39.548 "product_name": "Malloc disk", 00:17:39.548 "block_size": 512, 00:17:39.548 "num_blocks": 65536, 00:17:39.548 "uuid": "d68f6c43-383c-4131-bde9-27d305172aaa", 00:17:39.548 "assigned_rate_limits": { 00:17:39.548 "rw_ios_per_sec": 0, 00:17:39.548 "rw_mbytes_per_sec": 0, 00:17:39.548 "r_mbytes_per_sec": 0, 00:17:39.548 "w_mbytes_per_sec": 0 00:17:39.548 }, 00:17:39.548 "claimed": true, 00:17:39.548 "claim_type": "exclusive_write", 00:17:39.548 "zoned": false, 00:17:39.548 
"supported_io_types": { 00:17:39.548 "read": true, 00:17:39.548 "write": true, 00:17:39.548 "unmap": true, 00:17:39.548 "write_zeroes": true, 00:17:39.548 "flush": true, 00:17:39.548 "reset": true, 00:17:39.548 "compare": false, 00:17:39.548 "compare_and_write": false, 00:17:39.548 "abort": true, 00:17:39.548 "nvme_admin": false, 00:17:39.548 "nvme_io": false 00:17:39.548 }, 00:17:39.548 "memory_domains": [ 00:17:39.548 { 00:17:39.548 "dma_device_id": "system", 00:17:39.548 "dma_device_type": 1 00:17:39.548 }, 00:17:39.548 { 00:17:39.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.548 "dma_device_type": 2 00:17:39.548 } 00:17:39.548 ], 00:17:39.548 "driver_specific": {} 00:17:39.548 }' 00:17:39.548 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.548 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.548 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.548 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.548 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:39.806 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.064 "name": "BaseBdev2", 00:17:40.064 "aliases": [ 00:17:40.064 "e888824a-7371-4f06-8efe-30aa97fcbd19" 00:17:40.064 ], 00:17:40.064 "product_name": "Malloc disk", 00:17:40.064 "block_size": 512, 00:17:40.064 "num_blocks": 65536, 00:17:40.064 "uuid": "e888824a-7371-4f06-8efe-30aa97fcbd19", 00:17:40.064 "assigned_rate_limits": { 00:17:40.064 "rw_ios_per_sec": 0, 00:17:40.064 "rw_mbytes_per_sec": 0, 00:17:40.064 "r_mbytes_per_sec": 0, 00:17:40.064 "w_mbytes_per_sec": 0 00:17:40.064 }, 00:17:40.064 "claimed": true, 00:17:40.064 "claim_type": "exclusive_write", 00:17:40.064 "zoned": false, 00:17:40.064 "supported_io_types": { 00:17:40.064 "read": true, 00:17:40.064 "write": true, 00:17:40.064 "unmap": true, 00:17:40.064 "write_zeroes": true, 00:17:40.064 "flush": true, 00:17:40.064 "reset": true, 00:17:40.064 "compare": false, 00:17:40.064 "compare_and_write": false, 00:17:40.064 "abort": true, 00:17:40.064 "nvme_admin": false, 00:17:40.064 "nvme_io": false 00:17:40.064 }, 00:17:40.064 "memory_domains": [ 00:17:40.064 { 00:17:40.064 "dma_device_id": "system", 00:17:40.064 "dma_device_type": 1 00:17:40.064 }, 00:17:40.064 { 00:17:40.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.064 "dma_device_type": 2 00:17:40.064 } 00:17:40.064 ], 00:17:40.064 "driver_specific": {} 00:17:40.064 }' 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.064 
19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:40.064 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.323 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:40.581 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.581 "name": "BaseBdev3", 00:17:40.581 "aliases": [ 00:17:40.581 "5b505467-7d58-4746-bbec-7cdcbb8fbb7e" 00:17:40.581 ], 00:17:40.581 "product_name": "Malloc disk", 00:17:40.581 "block_size": 512, 00:17:40.581 "num_blocks": 65536, 00:17:40.581 "uuid": "5b505467-7d58-4746-bbec-7cdcbb8fbb7e", 00:17:40.581 "assigned_rate_limits": { 00:17:40.581 "rw_ios_per_sec": 0, 00:17:40.581 "rw_mbytes_per_sec": 0, 00:17:40.581 "r_mbytes_per_sec": 0, 00:17:40.581 
"w_mbytes_per_sec": 0 00:17:40.581 }, 00:17:40.581 "claimed": true, 00:17:40.581 "claim_type": "exclusive_write", 00:17:40.581 "zoned": false, 00:17:40.581 "supported_io_types": { 00:17:40.581 "read": true, 00:17:40.581 "write": true, 00:17:40.581 "unmap": true, 00:17:40.581 "write_zeroes": true, 00:17:40.581 "flush": true, 00:17:40.581 "reset": true, 00:17:40.581 "compare": false, 00:17:40.581 "compare_and_write": false, 00:17:40.581 "abort": true, 00:17:40.581 "nvme_admin": false, 00:17:40.581 "nvme_io": false 00:17:40.581 }, 00:17:40.581 "memory_domains": [ 00:17:40.581 { 00:17:40.581 "dma_device_id": "system", 00:17:40.581 "dma_device_type": 1 00:17:40.581 }, 00:17:40.581 { 00:17:40.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.581 "dma_device_type": 2 00:17:40.581 } 00:17:40.581 ], 00:17:40.581 "driver_specific": {} 00:17:40.581 }' 00:17:40.581 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.581 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.581 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:40.581 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.839 
19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:40.839 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:41.097 [2024-06-10 19:01:55.775917] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.097 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.098 19:01:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.098 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.356 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.356 "name": "Existed_Raid", 00:17:41.356 "uuid": "f24b773f-4437-48d0-8286-6416057de37e", 00:17:41.356 "strip_size_kb": 0, 00:17:41.356 "state": "online", 00:17:41.356 "raid_level": "raid1", 00:17:41.356 "superblock": false, 00:17:41.356 "num_base_bdevs": 3, 00:17:41.356 "num_base_bdevs_discovered": 2, 00:17:41.356 "num_base_bdevs_operational": 2, 00:17:41.356 "base_bdevs_list": [ 00:17:41.356 { 00:17:41.356 "name": null, 00:17:41.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.356 "is_configured": false, 00:17:41.356 "data_offset": 0, 00:17:41.356 "data_size": 65536 00:17:41.356 }, 00:17:41.356 { 00:17:41.356 "name": "BaseBdev2", 00:17:41.356 "uuid": "e888824a-7371-4f06-8efe-30aa97fcbd19", 00:17:41.356 "is_configured": true, 00:17:41.356 "data_offset": 0, 00:17:41.356 "data_size": 65536 00:17:41.356 }, 00:17:41.356 { 00:17:41.356 "name": "BaseBdev3", 00:17:41.356 "uuid": "5b505467-7d58-4746-bbec-7cdcbb8fbb7e", 00:17:41.356 "is_configured": true, 00:17:41.356 "data_offset": 0, 00:17:41.356 "data_size": 65536 00:17:41.356 } 00:17:41.356 ] 00:17:41.356 }' 00:17:41.356 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.356 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.923 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:41.923 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i 
< num_base_bdevs )) 00:17:41.923 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.923 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:42.182 [2024-06-10 19:01:56.891929] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.182 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:42.441 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:42.441 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.441 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:42.700 [2024-06-10 19:01:57.355383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:42.700 [2024-06-10 19:01:57.355449] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.700 [2024-06-10 19:01:57.365768] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.700 [2024-06-10 19:01:57.365797] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.700 [2024-06-10 19:01:57.365808] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa386f0 name Existed_Raid, state offline 00:17:42.700 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:42.700 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:42.700 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.700 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:42.959 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:42.959 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:42.959 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:42.959 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:42.959 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:42.959 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.218 BaseBdev2 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # 
local bdev_name=BaseBdev2 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:43.218 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.477 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.736 [ 00:17:43.736 { 00:17:43.736 "name": "BaseBdev2", 00:17:43.736 "aliases": [ 00:17:43.736 "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60" 00:17:43.736 ], 00:17:43.736 "product_name": "Malloc disk", 00:17:43.736 "block_size": 512, 00:17:43.736 "num_blocks": 65536, 00:17:43.736 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:43.736 "assigned_rate_limits": { 00:17:43.736 "rw_ios_per_sec": 0, 00:17:43.736 "rw_mbytes_per_sec": 0, 00:17:43.736 "r_mbytes_per_sec": 0, 00:17:43.736 "w_mbytes_per_sec": 0 00:17:43.736 }, 00:17:43.736 "claimed": false, 00:17:43.736 "zoned": false, 00:17:43.736 "supported_io_types": { 00:17:43.736 "read": true, 00:17:43.736 "write": true, 00:17:43.736 "unmap": true, 00:17:43.736 "write_zeroes": true, 00:17:43.736 "flush": true, 00:17:43.736 "reset": true, 00:17:43.736 "compare": false, 00:17:43.736 "compare_and_write": false, 00:17:43.736 "abort": true, 00:17:43.736 "nvme_admin": false, 00:17:43.736 "nvme_io": false 00:17:43.736 }, 00:17:43.736 "memory_domains": [ 00:17:43.736 { 00:17:43.736 "dma_device_id": "system", 00:17:43.736 "dma_device_type": 1 
00:17:43.736 }, 00:17:43.736 { 00:17:43.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.736 "dma_device_type": 2 00:17:43.736 } 00:17:43.736 ], 00:17:43.736 "driver_specific": {} 00:17:43.736 } 00:17:43.736 ] 00:17:43.736 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:43.736 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:43.736 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:43.736 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:43.736 BaseBdev3 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.995 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:44.254 [ 00:17:44.254 { 00:17:44.254 "name": "BaseBdev3", 00:17:44.254 "aliases": [ 00:17:44.254 
"99f52090-12c7-49b2-b2da-99e134e0e62f" 00:17:44.254 ], 00:17:44.254 "product_name": "Malloc disk", 00:17:44.254 "block_size": 512, 00:17:44.254 "num_blocks": 65536, 00:17:44.254 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:44.254 "assigned_rate_limits": { 00:17:44.254 "rw_ios_per_sec": 0, 00:17:44.254 "rw_mbytes_per_sec": 0, 00:17:44.254 "r_mbytes_per_sec": 0, 00:17:44.254 "w_mbytes_per_sec": 0 00:17:44.254 }, 00:17:44.254 "claimed": false, 00:17:44.254 "zoned": false, 00:17:44.254 "supported_io_types": { 00:17:44.254 "read": true, 00:17:44.254 "write": true, 00:17:44.254 "unmap": true, 00:17:44.254 "write_zeroes": true, 00:17:44.254 "flush": true, 00:17:44.254 "reset": true, 00:17:44.254 "compare": false, 00:17:44.254 "compare_and_write": false, 00:17:44.254 "abort": true, 00:17:44.254 "nvme_admin": false, 00:17:44.254 "nvme_io": false 00:17:44.254 }, 00:17:44.254 "memory_domains": [ 00:17:44.254 { 00:17:44.254 "dma_device_id": "system", 00:17:44.254 "dma_device_type": 1 00:17:44.254 }, 00:17:44.254 { 00:17:44.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.254 "dma_device_type": 2 00:17:44.254 } 00:17:44.254 ], 00:17:44.254 "driver_specific": {} 00:17:44.254 } 00:17:44.254 ] 00:17:44.254 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:44.254 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:44.254 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:44.254 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:44.513 [2024-06-10 19:01:59.162642] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.513 [2024-06-10 19:01:59.162680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.513 [2024-06-10 19:01:59.162697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.513 [2024-06-10 19:01:59.163922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.513 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.772 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.772 "name": "Existed_Raid", 00:17:44.772 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:44.772 "strip_size_kb": 0, 00:17:44.772 "state": "configuring", 00:17:44.772 "raid_level": "raid1", 00:17:44.772 "superblock": false, 00:17:44.772 "num_base_bdevs": 3, 00:17:44.772 "num_base_bdevs_discovered": 2, 00:17:44.772 "num_base_bdevs_operational": 3, 00:17:44.772 "base_bdevs_list": [ 00:17:44.772 { 00:17:44.772 "name": "BaseBdev1", 00:17:44.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.772 "is_configured": false, 00:17:44.772 "data_offset": 0, 00:17:44.772 "data_size": 0 00:17:44.772 }, 00:17:44.772 { 00:17:44.772 "name": "BaseBdev2", 00:17:44.772 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:44.772 "is_configured": true, 00:17:44.772 "data_offset": 0, 00:17:44.772 "data_size": 65536 00:17:44.772 }, 00:17:44.772 { 00:17:44.772 "name": "BaseBdev3", 00:17:44.772 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:44.772 "is_configured": true, 00:17:44.772 "data_offset": 0, 00:17:44.772 "data_size": 65536 00:17:44.772 } 00:17:44.772 ] 00:17:44.772 }' 00:17:44.772 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.772 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.339 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:45.597 [2024-06-10 19:02:00.173288] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.597 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.856 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.856 "name": "Existed_Raid", 00:17:45.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.856 "strip_size_kb": 0, 00:17:45.856 "state": "configuring", 00:17:45.856 "raid_level": "raid1", 00:17:45.856 "superblock": false, 00:17:45.856 "num_base_bdevs": 3, 00:17:45.856 "num_base_bdevs_discovered": 1, 00:17:45.856 "num_base_bdevs_operational": 3, 00:17:45.856 "base_bdevs_list": [ 00:17:45.856 { 00:17:45.856 "name": "BaseBdev1", 00:17:45.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.856 "is_configured": false, 00:17:45.856 "data_offset": 0, 00:17:45.856 "data_size": 0 00:17:45.856 }, 00:17:45.856 { 00:17:45.856 "name": null, 00:17:45.856 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:45.856 "is_configured": false, 00:17:45.856 "data_offset": 0, 00:17:45.856 "data_size": 65536 00:17:45.856 }, 00:17:45.856 { 00:17:45.856 "name": 
"BaseBdev3", 00:17:45.856 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:45.856 "is_configured": true, 00:17:45.856 "data_offset": 0, 00:17:45.856 "data_size": 65536 00:17:45.856 } 00:17:45.856 ] 00:17:45.856 }' 00:17:45.856 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.856 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.423 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.423 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:46.682 [2024-06-10 19:02:01.419789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.682 BaseBdev1 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:46.682 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.942 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:47.201 [ 00:17:47.201 { 00:17:47.201 "name": "BaseBdev1", 00:17:47.201 "aliases": [ 00:17:47.201 "06deca3a-7ef2-42b2-a2ec-c7f172570319" 00:17:47.201 ], 00:17:47.201 "product_name": "Malloc disk", 00:17:47.201 "block_size": 512, 00:17:47.201 "num_blocks": 65536, 00:17:47.201 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:47.201 "assigned_rate_limits": { 00:17:47.201 "rw_ios_per_sec": 0, 00:17:47.201 "rw_mbytes_per_sec": 0, 00:17:47.201 "r_mbytes_per_sec": 0, 00:17:47.201 "w_mbytes_per_sec": 0 00:17:47.201 }, 00:17:47.201 "claimed": true, 00:17:47.201 "claim_type": "exclusive_write", 00:17:47.201 "zoned": false, 00:17:47.201 "supported_io_types": { 00:17:47.201 "read": true, 00:17:47.201 "write": true, 00:17:47.201 "unmap": true, 00:17:47.201 "write_zeroes": true, 00:17:47.201 "flush": true, 00:17:47.201 "reset": true, 00:17:47.201 "compare": false, 00:17:47.201 "compare_and_write": false, 00:17:47.201 "abort": true, 00:17:47.201 "nvme_admin": false, 00:17:47.201 "nvme_io": false 00:17:47.201 }, 00:17:47.201 "memory_domains": [ 00:17:47.201 { 00:17:47.201 "dma_device_id": "system", 00:17:47.201 "dma_device_type": 1 00:17:47.201 }, 00:17:47.201 { 00:17:47.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.201 "dma_device_type": 2 00:17:47.201 } 00:17:47.201 ], 00:17:47.201 "driver_specific": {} 00:17:47.201 } 00:17:47.201 ] 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.201 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.461 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.461 "name": "Existed_Raid", 00:17:47.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.461 "strip_size_kb": 0, 00:17:47.461 "state": "configuring", 00:17:47.461 "raid_level": "raid1", 00:17:47.461 "superblock": false, 00:17:47.461 "num_base_bdevs": 3, 00:17:47.461 "num_base_bdevs_discovered": 2, 00:17:47.461 "num_base_bdevs_operational": 3, 00:17:47.461 "base_bdevs_list": [ 00:17:47.461 { 00:17:47.461 "name": "BaseBdev1", 00:17:47.461 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:47.461 "is_configured": true, 00:17:47.461 "data_offset": 0, 00:17:47.461 "data_size": 65536 
00:17:47.461 }, 00:17:47.461 { 00:17:47.461 "name": null, 00:17:47.461 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:47.461 "is_configured": false, 00:17:47.461 "data_offset": 0, 00:17:47.461 "data_size": 65536 00:17:47.461 }, 00:17:47.461 { 00:17:47.461 "name": "BaseBdev3", 00:17:47.461 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:47.461 "is_configured": true, 00:17:47.461 "data_offset": 0, 00:17:47.461 "data_size": 65536 00:17:47.461 } 00:17:47.461 ] 00:17:47.461 }' 00:17:47.461 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.461 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.027 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.027 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:48.287 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:48.287 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:48.546 [2024-06-10 19:02:03.084205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.546 19:02:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.546 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.806 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.806 "name": "Existed_Raid", 00:17:48.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.806 "strip_size_kb": 0, 00:17:48.806 "state": "configuring", 00:17:48.806 "raid_level": "raid1", 00:17:48.806 "superblock": false, 00:17:48.806 "num_base_bdevs": 3, 00:17:48.806 "num_base_bdevs_discovered": 1, 00:17:48.806 "num_base_bdevs_operational": 3, 00:17:48.806 "base_bdevs_list": [ 00:17:48.806 { 00:17:48.806 "name": "BaseBdev1", 00:17:48.806 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:48.806 "is_configured": true, 00:17:48.806 "data_offset": 0, 00:17:48.806 "data_size": 65536 00:17:48.806 }, 00:17:48.806 { 00:17:48.806 "name": null, 00:17:48.806 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:48.806 "is_configured": false, 00:17:48.806 "data_offset": 0, 00:17:48.806 "data_size": 65536 00:17:48.806 }, 00:17:48.806 { 00:17:48.806 "name": null, 00:17:48.806 "uuid": 
"99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:48.806 "is_configured": false, 00:17:48.806 "data_offset": 0, 00:17:48.806 "data_size": 65536 00:17:48.806 } 00:17:48.806 ] 00:17:48.806 }' 00:17:48.806 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.806 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.374 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.374 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:49.633 [2024-06-10 19:02:04.339537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.633 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.634 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.634 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.634 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.893 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.893 "name": "Existed_Raid", 00:17:49.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.893 "strip_size_kb": 0, 00:17:49.893 "state": "configuring", 00:17:49.893 "raid_level": "raid1", 00:17:49.893 "superblock": false, 00:17:49.893 "num_base_bdevs": 3, 00:17:49.893 "num_base_bdevs_discovered": 2, 00:17:49.893 "num_base_bdevs_operational": 3, 00:17:49.893 "base_bdevs_list": [ 00:17:49.893 { 00:17:49.893 "name": "BaseBdev1", 00:17:49.893 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:49.893 "is_configured": true, 00:17:49.893 "data_offset": 0, 00:17:49.893 "data_size": 65536 00:17:49.893 }, 00:17:49.893 { 00:17:49.893 "name": null, 00:17:49.893 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:49.893 "is_configured": false, 00:17:49.893 "data_offset": 0, 00:17:49.893 "data_size": 65536 00:17:49.893 }, 00:17:49.893 { 00:17:49.893 "name": "BaseBdev3", 00:17:49.893 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:49.893 "is_configured": true, 00:17:49.893 "data_offset": 0, 00:17:49.893 "data_size": 65536 00:17:49.893 } 00:17:49.893 ] 00:17:49.893 }' 00:17:49.893 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.893 19:02:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.462 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.462 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.721 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:50.721 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:50.981 [2024-06-10 19:02:05.546726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.981 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.240 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.240 "name": "Existed_Raid", 00:17:51.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.240 "strip_size_kb": 0, 00:17:51.240 "state": "configuring", 00:17:51.240 "raid_level": "raid1", 00:17:51.240 "superblock": false, 00:17:51.240 "num_base_bdevs": 3, 00:17:51.240 "num_base_bdevs_discovered": 1, 00:17:51.240 "num_base_bdevs_operational": 3, 00:17:51.240 "base_bdevs_list": [ 00:17:51.240 { 00:17:51.240 "name": null, 00:17:51.240 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:51.240 "is_configured": false, 00:17:51.240 "data_offset": 0, 00:17:51.240 "data_size": 65536 00:17:51.240 }, 00:17:51.240 { 00:17:51.240 "name": null, 00:17:51.240 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:51.240 "is_configured": false, 00:17:51.240 "data_offset": 0, 00:17:51.240 "data_size": 65536 00:17:51.240 }, 00:17:51.240 { 00:17:51.240 "name": "BaseBdev3", 00:17:51.240 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:51.240 "is_configured": true, 00:17:51.240 "data_offset": 0, 00:17:51.240 "data_size": 65536 00:17:51.240 } 00:17:51.240 ] 00:17:51.240 }' 00:17:51.240 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.240 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.809 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.809 19:02:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:52.068 [2024-06-10 19:02:06.803992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.068 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.327 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.327 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:52.327 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.327 "name": "Existed_Raid", 00:17:52.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.327 "strip_size_kb": 0, 00:17:52.327 "state": "configuring", 00:17:52.327 "raid_level": "raid1", 00:17:52.327 "superblock": false, 00:17:52.327 "num_base_bdevs": 3, 00:17:52.327 "num_base_bdevs_discovered": 2, 00:17:52.327 "num_base_bdevs_operational": 3, 00:17:52.327 "base_bdevs_list": [ 00:17:52.327 { 00:17:52.327 "name": null, 00:17:52.327 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:52.327 "is_configured": false, 00:17:52.327 "data_offset": 0, 00:17:52.327 "data_size": 65536 00:17:52.327 }, 00:17:52.327 { 00:17:52.327 "name": "BaseBdev2", 00:17:52.327 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:52.327 "is_configured": true, 00:17:52.327 "data_offset": 0, 00:17:52.327 "data_size": 65536 00:17:52.327 }, 00:17:52.327 { 00:17:52.327 "name": "BaseBdev3", 00:17:52.327 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:52.327 "is_configured": true, 00:17:52.327 "data_offset": 0, 00:17:52.327 "data_size": 65536 00:17:52.327 } 00:17:52.327 ] 00:17:52.327 }' 00:17:52.327 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.327 19:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.896 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.896 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:53.155 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:53.155 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.155 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:53.414 19:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 06deca3a-7ef2-42b2-a2ec-c7f172570319 00:17:53.673 [2024-06-10 19:02:08.182789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:53.673 [2024-06-10 19:02:08.182821] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa2e970 00:17:53.673 [2024-06-10 19:02:08.182829] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:53.673 [2024-06-10 19:02:08.183007] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa39d90 00:17:53.673 [2024-06-10 19:02:08.183115] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa2e970 00:17:53.673 [2024-06-10 19:02:08.183124] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xa2e970 00:17:53.673 [2024-06-10 19:02:08.183269] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.673 NewBaseBdev 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:53.673 19:02:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.673 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:53.933 [ 00:17:53.933 { 00:17:53.933 "name": "NewBaseBdev", 00:17:53.933 "aliases": [ 00:17:53.933 "06deca3a-7ef2-42b2-a2ec-c7f172570319" 00:17:53.933 ], 00:17:53.933 "product_name": "Malloc disk", 00:17:53.933 "block_size": 512, 00:17:53.933 "num_blocks": 65536, 00:17:53.933 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:53.933 "assigned_rate_limits": { 00:17:53.933 "rw_ios_per_sec": 0, 00:17:53.933 "rw_mbytes_per_sec": 0, 00:17:53.933 "r_mbytes_per_sec": 0, 00:17:53.933 "w_mbytes_per_sec": 0 00:17:53.933 }, 00:17:53.933 "claimed": true, 00:17:53.933 "claim_type": "exclusive_write", 00:17:53.933 "zoned": false, 00:17:53.933 "supported_io_types": { 00:17:53.933 "read": true, 00:17:53.933 "write": true, 00:17:53.933 "unmap": true, 00:17:53.933 "write_zeroes": true, 00:17:53.933 "flush": true, 00:17:53.933 "reset": true, 00:17:53.933 "compare": false, 00:17:53.933 "compare_and_write": false, 00:17:53.933 "abort": true, 00:17:53.933 "nvme_admin": false, 00:17:53.933 "nvme_io": false 00:17:53.933 }, 00:17:53.933 "memory_domains": [ 00:17:53.933 { 00:17:53.933 "dma_device_id": "system", 00:17:53.933 "dma_device_type": 1 00:17:53.933 }, 00:17:53.933 { 00:17:53.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.933 "dma_device_type": 2 00:17:53.933 } 00:17:53.933 ], 00:17:53.933 "driver_specific": {} 00:17:53.933 } 00:17:53.933 ] 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:17:53.933 
19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.933 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.216 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.216 "name": "Existed_Raid", 00:17:54.216 "uuid": "36f6b86c-8de9-41bb-9455-b96f34d6b5ae", 00:17:54.216 "strip_size_kb": 0, 00:17:54.216 "state": "online", 00:17:54.216 "raid_level": "raid1", 00:17:54.216 "superblock": false, 00:17:54.216 "num_base_bdevs": 3, 00:17:54.216 "num_base_bdevs_discovered": 3, 00:17:54.216 "num_base_bdevs_operational": 3, 00:17:54.216 "base_bdevs_list": [ 00:17:54.216 { 00:17:54.216 "name": 
"NewBaseBdev", 00:17:54.216 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:54.216 "is_configured": true, 00:17:54.216 "data_offset": 0, 00:17:54.216 "data_size": 65536 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "name": "BaseBdev2", 00:17:54.216 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:54.216 "is_configured": true, 00:17:54.216 "data_offset": 0, 00:17:54.216 "data_size": 65536 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "name": "BaseBdev3", 00:17:54.216 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:54.216 "is_configured": true, 00:17:54.216 "data_offset": 0, 00:17:54.216 "data_size": 65536 00:17:54.216 } 00:17:54.216 ] 00:17:54.216 }' 00:17:54.216 19:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.216 19:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:54.844 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:55.103 [2024-06-10 19:02:09.658937] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.104 19:02:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:55.104 "name": "Existed_Raid", 00:17:55.104 "aliases": [ 00:17:55.104 "36f6b86c-8de9-41bb-9455-b96f34d6b5ae" 00:17:55.104 ], 00:17:55.104 "product_name": "Raid Volume", 00:17:55.104 "block_size": 512, 00:17:55.104 "num_blocks": 65536, 00:17:55.104 "uuid": "36f6b86c-8de9-41bb-9455-b96f34d6b5ae", 00:17:55.104 "assigned_rate_limits": { 00:17:55.104 "rw_ios_per_sec": 0, 00:17:55.104 "rw_mbytes_per_sec": 0, 00:17:55.104 "r_mbytes_per_sec": 0, 00:17:55.104 "w_mbytes_per_sec": 0 00:17:55.104 }, 00:17:55.104 "claimed": false, 00:17:55.104 "zoned": false, 00:17:55.104 "supported_io_types": { 00:17:55.104 "read": true, 00:17:55.104 "write": true, 00:17:55.104 "unmap": false, 00:17:55.104 "write_zeroes": true, 00:17:55.104 "flush": false, 00:17:55.104 "reset": true, 00:17:55.104 "compare": false, 00:17:55.104 "compare_and_write": false, 00:17:55.104 "abort": false, 00:17:55.104 "nvme_admin": false, 00:17:55.104 "nvme_io": false 00:17:55.104 }, 00:17:55.104 "memory_domains": [ 00:17:55.104 { 00:17:55.104 "dma_device_id": "system", 00:17:55.104 "dma_device_type": 1 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.104 "dma_device_type": 2 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "dma_device_id": "system", 00:17:55.104 "dma_device_type": 1 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.104 "dma_device_type": 2 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "dma_device_id": "system", 00:17:55.104 "dma_device_type": 1 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.104 "dma_device_type": 2 00:17:55.104 } 00:17:55.104 ], 00:17:55.104 "driver_specific": { 00:17:55.104 "raid": { 00:17:55.104 "uuid": "36f6b86c-8de9-41bb-9455-b96f34d6b5ae", 00:17:55.104 "strip_size_kb": 0, 00:17:55.104 "state": "online", 00:17:55.104 "raid_level": "raid1", 00:17:55.104 
"superblock": false, 00:17:55.104 "num_base_bdevs": 3, 00:17:55.104 "num_base_bdevs_discovered": 3, 00:17:55.104 "num_base_bdevs_operational": 3, 00:17:55.104 "base_bdevs_list": [ 00:17:55.104 { 00:17:55.104 "name": "NewBaseBdev", 00:17:55.104 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:55.104 "is_configured": true, 00:17:55.104 "data_offset": 0, 00:17:55.104 "data_size": 65536 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "name": "BaseBdev2", 00:17:55.104 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:55.104 "is_configured": true, 00:17:55.104 "data_offset": 0, 00:17:55.104 "data_size": 65536 00:17:55.104 }, 00:17:55.104 { 00:17:55.104 "name": "BaseBdev3", 00:17:55.104 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:55.104 "is_configured": true, 00:17:55.104 "data_offset": 0, 00:17:55.104 "data_size": 65536 00:17:55.104 } 00:17:55.104 ] 00:17:55.104 } 00:17:55.104 } 00:17:55.104 }' 00:17:55.104 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.104 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:55.104 BaseBdev2 00:17:55.104 BaseBdev3' 00:17:55.104 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.104 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:55.104 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.365 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.365 "name": "NewBaseBdev", 00:17:55.365 "aliases": [ 00:17:55.365 "06deca3a-7ef2-42b2-a2ec-c7f172570319" 00:17:55.365 ], 00:17:55.365 "product_name": "Malloc disk", 00:17:55.365 "block_size": 512, 00:17:55.365 
"num_blocks": 65536, 00:17:55.365 "uuid": "06deca3a-7ef2-42b2-a2ec-c7f172570319", 00:17:55.365 "assigned_rate_limits": { 00:17:55.365 "rw_ios_per_sec": 0, 00:17:55.365 "rw_mbytes_per_sec": 0, 00:17:55.365 "r_mbytes_per_sec": 0, 00:17:55.365 "w_mbytes_per_sec": 0 00:17:55.365 }, 00:17:55.365 "claimed": true, 00:17:55.365 "claim_type": "exclusive_write", 00:17:55.365 "zoned": false, 00:17:55.365 "supported_io_types": { 00:17:55.365 "read": true, 00:17:55.365 "write": true, 00:17:55.365 "unmap": true, 00:17:55.365 "write_zeroes": true, 00:17:55.365 "flush": true, 00:17:55.365 "reset": true, 00:17:55.365 "compare": false, 00:17:55.365 "compare_and_write": false, 00:17:55.365 "abort": true, 00:17:55.365 "nvme_admin": false, 00:17:55.365 "nvme_io": false 00:17:55.365 }, 00:17:55.365 "memory_domains": [ 00:17:55.365 { 00:17:55.365 "dma_device_id": "system", 00:17:55.365 "dma_device_type": 1 00:17:55.365 }, 00:17:55.365 { 00:17:55.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.365 "dma_device_type": 2 00:17:55.365 } 00:17:55.365 ], 00:17:55.365 "driver_specific": {} 00:17:55.365 }' 00:17:55.365 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.365 19:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.365 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:55.365 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.365 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:55.624 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.883 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.883 "name": "BaseBdev2", 00:17:55.883 "aliases": [ 00:17:55.883 "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60" 00:17:55.883 ], 00:17:55.883 "product_name": "Malloc disk", 00:17:55.883 "block_size": 512, 00:17:55.883 "num_blocks": 65536, 00:17:55.883 "uuid": "9ba483ec-d6b0-46d6-ac64-d32e9ffc7c60", 00:17:55.883 "assigned_rate_limits": { 00:17:55.883 "rw_ios_per_sec": 0, 00:17:55.883 "rw_mbytes_per_sec": 0, 00:17:55.883 "r_mbytes_per_sec": 0, 00:17:55.883 "w_mbytes_per_sec": 0 00:17:55.883 }, 00:17:55.884 "claimed": true, 00:17:55.884 "claim_type": "exclusive_write", 00:17:55.884 "zoned": false, 00:17:55.884 "supported_io_types": { 00:17:55.884 "read": true, 00:17:55.884 "write": true, 00:17:55.884 "unmap": true, 00:17:55.884 "write_zeroes": true, 00:17:55.884 "flush": true, 00:17:55.884 "reset": true, 00:17:55.884 "compare": false, 00:17:55.884 "compare_and_write": false, 00:17:55.884 "abort": true, 00:17:55.884 "nvme_admin": false, 00:17:55.884 "nvme_io": false 00:17:55.884 }, 00:17:55.884 "memory_domains": [ 00:17:55.884 { 00:17:55.884 "dma_device_id": "system", 00:17:55.884 "dma_device_type": 1 
00:17:55.884 }, 00:17:55.884 { 00:17:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.884 "dma_device_type": 2 00:17:55.884 } 00:17:55.884 ], 00:17:55.884 "driver_specific": {} 00:17:55.884 }' 00:17:55.884 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.884 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.884 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:55.884 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:56.143 19:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.403 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.403 "name": "BaseBdev3", 
00:17:56.403 "aliases": [ 00:17:56.403 "99f52090-12c7-49b2-b2da-99e134e0e62f" 00:17:56.403 ], 00:17:56.403 "product_name": "Malloc disk", 00:17:56.403 "block_size": 512, 00:17:56.403 "num_blocks": 65536, 00:17:56.403 "uuid": "99f52090-12c7-49b2-b2da-99e134e0e62f", 00:17:56.403 "assigned_rate_limits": { 00:17:56.403 "rw_ios_per_sec": 0, 00:17:56.403 "rw_mbytes_per_sec": 0, 00:17:56.403 "r_mbytes_per_sec": 0, 00:17:56.403 "w_mbytes_per_sec": 0 00:17:56.403 }, 00:17:56.403 "claimed": true, 00:17:56.403 "claim_type": "exclusive_write", 00:17:56.403 "zoned": false, 00:17:56.403 "supported_io_types": { 00:17:56.403 "read": true, 00:17:56.403 "write": true, 00:17:56.403 "unmap": true, 00:17:56.403 "write_zeroes": true, 00:17:56.403 "flush": true, 00:17:56.403 "reset": true, 00:17:56.403 "compare": false, 00:17:56.403 "compare_and_write": false, 00:17:56.403 "abort": true, 00:17:56.403 "nvme_admin": false, 00:17:56.403 "nvme_io": false 00:17:56.403 }, 00:17:56.403 "memory_domains": [ 00:17:56.403 { 00:17:56.403 "dma_device_id": "system", 00:17:56.403 "dma_device_type": 1 00:17:56.403 }, 00:17:56.403 { 00:17:56.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.403 "dma_device_type": 2 00:17:56.403 } 00:17:56.403 ], 00:17:56.403 "driver_specific": {} 00:17:56.403 }' 00:17:56.403 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.403 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.661 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:56.924 [2024-06-10 19:02:11.631912] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.924 [2024-06-10 19:02:11.631934] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.924 [2024-06-10 19:02:11.631977] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.924 [2024-06-10 19:02:11.632212] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.924 [2024-06-10 19:02:11.632224] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa2e970 name Existed_Raid, state offline 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1673993 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1673993 ']' 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1673993 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:17:56.924 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:56.924 19:02:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1673993 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1673993' 00:17:57.190 killing process with pid 1673993 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1673993 00:17:57.190 [2024-06-10 19:02:11.705029] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1673993 00:17:57.190 [2024-06-10 19:02:11.728690] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:57.190 00:17:57.190 real 0m26.217s 00:17:57.190 user 0m48.089s 00:17:57.190 sys 0m4.764s 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:57.190 19:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.190 ************************************ 00:17:57.190 END TEST raid_state_function_test 00:17:57.190 ************************************ 00:17:57.450 19:02:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:17:57.450 19:02:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:57.450 19:02:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:57.450 19:02:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.450 ************************************ 00:17:57.450 START TEST raid_state_function_test_sb 
00:17:57.450 ************************************ 00:17:57.450 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 3 true 00:17:57.450 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:57.450 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:57.451 19:02:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1679131 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1679131' 00:17:57.451 Process raid pid: 1679131 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1679131 /var/tmp/spdk-raid.sock 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1679131 ']' 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:57.451 
19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:57.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:57.451 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.451 [2024-06-10 19:02:12.069637] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:17:57.451 [2024-06-10 19:02:12.069691] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:57.451 EAL: Requested device 0000:b6:01.0 cannot be used 00:17:57.451 [2024-06-10 19:02:12.200542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.710 [2024-06-10 19:02:12.286664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.710 [2024-06-10 19:02:12.348317] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.710 [2024-06-10 19:02:12.348353] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.279 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:58.279 19:02:12
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:17:58.279 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:58.538 [2024-06-10 19:02:13.163402] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.538 [2024-06-10 19:02:13.163438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.538 [2024-06-10 19:02:13.163448] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.538 [2024-06-10 19:02:13.163459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.538 [2024-06-10 19:02:13.163467] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:58.538 [2024-06-10 19:02:13.163477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.538 
19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.538 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.539 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.539 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.798 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.798 "name": "Existed_Raid", 00:17:58.798 "uuid": "f30ca4bf-dcb9-4708-bf35-6c12ea3bb818", 00:17:58.798 "strip_size_kb": 0, 00:17:58.798 "state": "configuring", 00:17:58.798 "raid_level": "raid1", 00:17:58.798 "superblock": true, 00:17:58.798 "num_base_bdevs": 3, 00:17:58.798 "num_base_bdevs_discovered": 0, 00:17:58.798 "num_base_bdevs_operational": 3, 00:17:58.798 "base_bdevs_list": [ 00:17:58.798 { 00:17:58.798 "name": "BaseBdev1", 00:17:58.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.798 "is_configured": false, 00:17:58.798 "data_offset": 0, 00:17:58.798 "data_size": 0 00:17:58.798 }, 00:17:58.798 { 00:17:58.798 "name": "BaseBdev2", 00:17:58.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.798 "is_configured": false, 00:17:58.798 "data_offset": 0, 00:17:58.798 "data_size": 0 00:17:58.798 }, 00:17:58.798 { 00:17:58.798 "name": "BaseBdev3", 00:17:58.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.798 "is_configured": false, 00:17:58.798 "data_offset": 0, 00:17:58.798 "data_size": 0 00:17:58.798 } 00:17:58.798 ] 00:17:58.798 }' 00:17:58.798 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.798 19:02:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.366 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:59.625 [2024-06-10 19:02:14.193988] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.625 [2024-06-10 19:02:14.194013] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x250ef30 name Existed_Raid, state configuring 00:17:59.625 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:59.884 [2024-06-10 19:02:14.422601] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.884 [2024-06-10 19:02:14.422624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.884 [2024-06-10 19:02:14.422633] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.884 [2024-06-10 19:02:14.422644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.884 [2024-06-10 19:02:14.422652] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:59.884 [2024-06-10 19:02:14.422662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:59.884 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:00.143 [2024-06-10 19:02:14.660663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.143 BaseBdev1 00:18:00.143 19:02:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:00.143 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:18:00.143 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:00.143 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:00.143 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:00.143 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:00.143 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.403 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.403 [ 00:18:00.403 { 00:18:00.403 "name": "BaseBdev1", 00:18:00.403 "aliases": [ 00:18:00.404 "1d3668e8-df92-46b5-abdc-b0e3ac7c970f" 00:18:00.404 ], 00:18:00.404 "product_name": "Malloc disk", 00:18:00.404 "block_size": 512, 00:18:00.404 "num_blocks": 65536, 00:18:00.404 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:00.404 "assigned_rate_limits": { 00:18:00.404 "rw_ios_per_sec": 0, 00:18:00.404 "rw_mbytes_per_sec": 0, 00:18:00.404 "r_mbytes_per_sec": 0, 00:18:00.404 "w_mbytes_per_sec": 0 00:18:00.404 }, 00:18:00.404 "claimed": true, 00:18:00.404 "claim_type": "exclusive_write", 00:18:00.404 "zoned": false, 00:18:00.404 "supported_io_types": { 00:18:00.404 "read": true, 00:18:00.404 "write": true, 00:18:00.404 "unmap": true, 00:18:00.404 "write_zeroes": true, 00:18:00.404 "flush": true, 00:18:00.404 "reset": true, 00:18:00.404 "compare": false, 00:18:00.404 
"compare_and_write": false, 00:18:00.404 "abort": true, 00:18:00.404 "nvme_admin": false, 00:18:00.404 "nvme_io": false 00:18:00.404 }, 00:18:00.404 "memory_domains": [ 00:18:00.404 { 00:18:00.404 "dma_device_id": "system", 00:18:00.404 "dma_device_type": 1 00:18:00.404 }, 00:18:00.404 { 00:18:00.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.404 "dma_device_type": 2 00:18:00.404 } 00:18:00.404 ], 00:18:00.404 "driver_specific": {} 00:18:00.404 } 00:18:00.404 ] 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.404 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.404 19:02:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.663 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.663 "name": "Existed_Raid", 00:18:00.663 "uuid": "9480eac6-9805-4825-a54a-2322074a8a8a", 00:18:00.663 "strip_size_kb": 0, 00:18:00.663 "state": "configuring", 00:18:00.663 "raid_level": "raid1", 00:18:00.663 "superblock": true, 00:18:00.663 "num_base_bdevs": 3, 00:18:00.663 "num_base_bdevs_discovered": 1, 00:18:00.663 "num_base_bdevs_operational": 3, 00:18:00.663 "base_bdevs_list": [ 00:18:00.663 { 00:18:00.663 "name": "BaseBdev1", 00:18:00.663 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:00.663 "is_configured": true, 00:18:00.663 "data_offset": 2048, 00:18:00.663 "data_size": 63488 00:18:00.663 }, 00:18:00.663 { 00:18:00.663 "name": "BaseBdev2", 00:18:00.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.663 "is_configured": false, 00:18:00.663 "data_offset": 0, 00:18:00.663 "data_size": 0 00:18:00.663 }, 00:18:00.663 { 00:18:00.663 "name": "BaseBdev3", 00:18:00.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.663 "is_configured": false, 00:18:00.663 "data_offset": 0, 00:18:00.663 "data_size": 0 00:18:00.663 } 00:18:00.663 ] 00:18:00.663 }' 00:18:00.663 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.663 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.231 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:01.489 [2024-06-10 19:02:16.156603] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.489 [2024-06-10 19:02:16.156637] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x250e800 name Existed_Raid, state 
configuring 00:18:01.489 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:01.748 [2024-06-10 19:02:16.385231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.748 [2024-06-10 19:02:16.386640] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.748 [2024-06-10 19:02:16.386669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.748 [2024-06-10 19:02:16.386678] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.748 [2024-06-10 19:02:16.386689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.748 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.007 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.007 "name": "Existed_Raid", 00:18:02.007 "uuid": "f1031e58-a126-4bd1-af4b-4d09eb7f51ea", 00:18:02.007 "strip_size_kb": 0, 00:18:02.007 "state": "configuring", 00:18:02.007 "raid_level": "raid1", 00:18:02.007 "superblock": true, 00:18:02.007 "num_base_bdevs": 3, 00:18:02.007 "num_base_bdevs_discovered": 1, 00:18:02.007 "num_base_bdevs_operational": 3, 00:18:02.007 "base_bdevs_list": [ 00:18:02.007 { 00:18:02.007 "name": "BaseBdev1", 00:18:02.007 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:02.007 "is_configured": true, 00:18:02.007 "data_offset": 2048, 00:18:02.007 "data_size": 63488 00:18:02.007 }, 00:18:02.007 { 00:18:02.007 "name": "BaseBdev2", 00:18:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.007 "is_configured": false, 00:18:02.007 "data_offset": 0, 00:18:02.007 "data_size": 0 00:18:02.007 }, 00:18:02.007 { 00:18:02.007 "name": "BaseBdev3", 00:18:02.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.007 "is_configured": false, 00:18:02.007 "data_offset": 0, 00:18:02.007 "data_size": 0 00:18:02.007 } 00:18:02.007 ] 00:18:02.007 }' 00:18:02.007 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.007 
19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.576 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:02.836 [2024-06-10 19:02:17.411101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.836 BaseBdev2 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:02.836 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.095 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.353 [ 00:18:03.353 { 00:18:03.353 "name": "BaseBdev2", 00:18:03.353 "aliases": [ 00:18:03.353 "fefcbdc4-d455-40aa-9992-0ab7415057fc" 00:18:03.353 ], 00:18:03.353 "product_name": "Malloc disk", 00:18:03.353 "block_size": 512, 00:18:03.353 "num_blocks": 65536, 00:18:03.353 "uuid": "fefcbdc4-d455-40aa-9992-0ab7415057fc", 00:18:03.353 "assigned_rate_limits": { 00:18:03.353 "rw_ios_per_sec": 0, 00:18:03.353 
"rw_mbytes_per_sec": 0, 00:18:03.353 "r_mbytes_per_sec": 0, 00:18:03.353 "w_mbytes_per_sec": 0 00:18:03.353 }, 00:18:03.353 "claimed": true, 00:18:03.353 "claim_type": "exclusive_write", 00:18:03.353 "zoned": false, 00:18:03.353 "supported_io_types": { 00:18:03.353 "read": true, 00:18:03.353 "write": true, 00:18:03.353 "unmap": true, 00:18:03.353 "write_zeroes": true, 00:18:03.353 "flush": true, 00:18:03.353 "reset": true, 00:18:03.353 "compare": false, 00:18:03.353 "compare_and_write": false, 00:18:03.353 "abort": true, 00:18:03.353 "nvme_admin": false, 00:18:03.353 "nvme_io": false 00:18:03.353 }, 00:18:03.353 "memory_domains": [ 00:18:03.353 { 00:18:03.353 "dma_device_id": "system", 00:18:03.353 "dma_device_type": 1 00:18:03.353 }, 00:18:03.353 { 00:18:03.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.353 "dma_device_type": 2 00:18:03.353 } 00:18:03.353 ], 00:18:03.353 "driver_specific": {} 00:18:03.353 } 00:18:03.353 ] 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:18:03.353 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.354 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.354 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.354 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.354 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.354 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.612 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.612 "name": "Existed_Raid", 00:18:03.612 "uuid": "f1031e58-a126-4bd1-af4b-4d09eb7f51ea", 00:18:03.612 "strip_size_kb": 0, 00:18:03.612 "state": "configuring", 00:18:03.612 "raid_level": "raid1", 00:18:03.612 "superblock": true, 00:18:03.612 "num_base_bdevs": 3, 00:18:03.612 "num_base_bdevs_discovered": 2, 00:18:03.612 "num_base_bdevs_operational": 3, 00:18:03.612 "base_bdevs_list": [ 00:18:03.612 { 00:18:03.612 "name": "BaseBdev1", 00:18:03.612 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:03.612 "is_configured": true, 00:18:03.612 "data_offset": 2048, 00:18:03.612 "data_size": 63488 00:18:03.612 }, 00:18:03.612 { 00:18:03.612 "name": "BaseBdev2", 00:18:03.612 "uuid": "fefcbdc4-d455-40aa-9992-0ab7415057fc", 00:18:03.612 "is_configured": true, 00:18:03.612 "data_offset": 2048, 00:18:03.612 "data_size": 63488 00:18:03.612 }, 00:18:03.612 { 00:18:03.612 "name": "BaseBdev3", 00:18:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.612 "is_configured": false, 00:18:03.612 "data_offset": 0, 00:18:03.612 "data_size": 0 00:18:03.612 } 
00:18:03.612 ] 00:18:03.612 }' 00:18:03.612 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.612 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:04.179 [2024-06-10 19:02:18.914254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.179 [2024-06-10 19:02:18.914394] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x250f6f0 00:18:04.179 [2024-06-10 19:02:18.914407] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:04.179 [2024-06-10 19:02:18.914566] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x250f3c0 00:18:04.179 [2024-06-10 19:02:18.914687] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x250f6f0 00:18:04.179 [2024-06-10 19:02:18.914697] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x250f6f0 00:18:04.179 [2024-06-10 19:02:18.914785] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.179 BaseBdev3 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:04.179 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.438 19:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:04.697 [ 00:18:04.697 { 00:18:04.697 "name": "BaseBdev3", 00:18:04.697 "aliases": [ 00:18:04.697 "6a5b488b-bddf-4263-b6ab-c1dfa9c42979" 00:18:04.697 ], 00:18:04.697 "product_name": "Malloc disk", 00:18:04.697 "block_size": 512, 00:18:04.697 "num_blocks": 65536, 00:18:04.697 "uuid": "6a5b488b-bddf-4263-b6ab-c1dfa9c42979", 00:18:04.697 "assigned_rate_limits": { 00:18:04.697 "rw_ios_per_sec": 0, 00:18:04.697 "rw_mbytes_per_sec": 0, 00:18:04.697 "r_mbytes_per_sec": 0, 00:18:04.697 "w_mbytes_per_sec": 0 00:18:04.697 }, 00:18:04.697 "claimed": true, 00:18:04.697 "claim_type": "exclusive_write", 00:18:04.697 "zoned": false, 00:18:04.698 "supported_io_types": { 00:18:04.698 "read": true, 00:18:04.698 "write": true, 00:18:04.698 "unmap": true, 00:18:04.698 "write_zeroes": true, 00:18:04.698 "flush": true, 00:18:04.698 "reset": true, 00:18:04.698 "compare": false, 00:18:04.698 "compare_and_write": false, 00:18:04.698 "abort": true, 00:18:04.698 "nvme_admin": false, 00:18:04.698 "nvme_io": false 00:18:04.698 }, 00:18:04.698 "memory_domains": [ 00:18:04.698 { 00:18:04.698 "dma_device_id": "system", 00:18:04.698 "dma_device_type": 1 00:18:04.698 }, 00:18:04.698 { 00:18:04.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.698 "dma_device_type": 2 00:18:04.698 } 00:18:04.698 ], 00:18:04.698 "driver_specific": {} 00:18:04.698 } 00:18:04.698 ] 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:04.698 19:02:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.698 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.957 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.957 "name": "Existed_Raid", 00:18:04.957 "uuid": "f1031e58-a126-4bd1-af4b-4d09eb7f51ea", 00:18:04.957 "strip_size_kb": 0, 00:18:04.957 "state": "online", 00:18:04.957 
"raid_level": "raid1", 00:18:04.957 "superblock": true, 00:18:04.957 "num_base_bdevs": 3, 00:18:04.957 "num_base_bdevs_discovered": 3, 00:18:04.957 "num_base_bdevs_operational": 3, 00:18:04.957 "base_bdevs_list": [ 00:18:04.957 { 00:18:04.957 "name": "BaseBdev1", 00:18:04.957 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:04.957 "is_configured": true, 00:18:04.957 "data_offset": 2048, 00:18:04.957 "data_size": 63488 00:18:04.957 }, 00:18:04.957 { 00:18:04.957 "name": "BaseBdev2", 00:18:04.957 "uuid": "fefcbdc4-d455-40aa-9992-0ab7415057fc", 00:18:04.957 "is_configured": true, 00:18:04.957 "data_offset": 2048, 00:18:04.957 "data_size": 63488 00:18:04.957 }, 00:18:04.957 { 00:18:04.957 "name": "BaseBdev3", 00:18:04.957 "uuid": "6a5b488b-bddf-4263-b6ab-c1dfa9c42979", 00:18:04.957 "is_configured": true, 00:18:04.957 "data_offset": 2048, 00:18:04.957 "data_size": 63488 00:18:04.957 } 00:18:04.957 ] 00:18:04.957 }' 00:18:04.957 19:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.957 19:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:05.525 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:05.784 [2024-06-10 19:02:20.382439] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.784 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:05.784 "name": "Existed_Raid", 00:18:05.784 "aliases": [ 00:18:05.784 "f1031e58-a126-4bd1-af4b-4d09eb7f51ea" 00:18:05.784 ], 00:18:05.784 "product_name": "Raid Volume", 00:18:05.784 "block_size": 512, 00:18:05.784 "num_blocks": 63488, 00:18:05.784 "uuid": "f1031e58-a126-4bd1-af4b-4d09eb7f51ea", 00:18:05.784 "assigned_rate_limits": { 00:18:05.784 "rw_ios_per_sec": 0, 00:18:05.784 "rw_mbytes_per_sec": 0, 00:18:05.784 "r_mbytes_per_sec": 0, 00:18:05.784 "w_mbytes_per_sec": 0 00:18:05.784 }, 00:18:05.784 "claimed": false, 00:18:05.784 "zoned": false, 00:18:05.784 "supported_io_types": { 00:18:05.784 "read": true, 00:18:05.784 "write": true, 00:18:05.784 "unmap": false, 00:18:05.784 "write_zeroes": true, 00:18:05.784 "flush": false, 00:18:05.784 "reset": true, 00:18:05.784 "compare": false, 00:18:05.784 "compare_and_write": false, 00:18:05.784 "abort": false, 00:18:05.784 "nvme_admin": false, 00:18:05.784 "nvme_io": false 00:18:05.784 }, 00:18:05.784 "memory_domains": [ 00:18:05.784 { 00:18:05.784 "dma_device_id": "system", 00:18:05.784 "dma_device_type": 1 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.784 "dma_device_type": 2 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "dma_device_id": "system", 00:18:05.784 "dma_device_type": 1 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.784 "dma_device_type": 2 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "dma_device_id": "system", 00:18:05.784 "dma_device_type": 1 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:05.784 "dma_device_type": 2 00:18:05.784 } 00:18:05.784 ], 00:18:05.784 "driver_specific": { 00:18:05.784 "raid": { 00:18:05.784 "uuid": "f1031e58-a126-4bd1-af4b-4d09eb7f51ea", 00:18:05.784 "strip_size_kb": 0, 00:18:05.784 "state": "online", 00:18:05.784 "raid_level": "raid1", 00:18:05.784 "superblock": true, 00:18:05.784 "num_base_bdevs": 3, 00:18:05.784 "num_base_bdevs_discovered": 3, 00:18:05.784 "num_base_bdevs_operational": 3, 00:18:05.784 "base_bdevs_list": [ 00:18:05.784 { 00:18:05.784 "name": "BaseBdev1", 00:18:05.784 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:05.784 "is_configured": true, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "name": "BaseBdev2", 00:18:05.784 "uuid": "fefcbdc4-d455-40aa-9992-0ab7415057fc", 00:18:05.784 "is_configured": true, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "name": "BaseBdev3", 00:18:05.784 "uuid": "6a5b488b-bddf-4263-b6ab-c1dfa9c42979", 00:18:05.784 "is_configured": true, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 } 00:18:05.784 ] 00:18:05.784 } 00:18:05.784 } 00:18:05.784 }' 00:18:05.784 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.784 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:05.784 BaseBdev2 00:18:05.784 BaseBdev3' 00:18:05.784 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.784 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:05.784 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.044 
19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.044 "name": "BaseBdev1", 00:18:06.044 "aliases": [ 00:18:06.044 "1d3668e8-df92-46b5-abdc-b0e3ac7c970f" 00:18:06.044 ], 00:18:06.044 "product_name": "Malloc disk", 00:18:06.044 "block_size": 512, 00:18:06.044 "num_blocks": 65536, 00:18:06.044 "uuid": "1d3668e8-df92-46b5-abdc-b0e3ac7c970f", 00:18:06.044 "assigned_rate_limits": { 00:18:06.044 "rw_ios_per_sec": 0, 00:18:06.044 "rw_mbytes_per_sec": 0, 00:18:06.044 "r_mbytes_per_sec": 0, 00:18:06.044 "w_mbytes_per_sec": 0 00:18:06.044 }, 00:18:06.044 "claimed": true, 00:18:06.044 "claim_type": "exclusive_write", 00:18:06.044 "zoned": false, 00:18:06.044 "supported_io_types": { 00:18:06.044 "read": true, 00:18:06.044 "write": true, 00:18:06.044 "unmap": true, 00:18:06.044 "write_zeroes": true, 00:18:06.044 "flush": true, 00:18:06.044 "reset": true, 00:18:06.044 "compare": false, 00:18:06.044 "compare_and_write": false, 00:18:06.044 "abort": true, 00:18:06.044 "nvme_admin": false, 00:18:06.044 "nvme_io": false 00:18:06.044 }, 00:18:06.044 "memory_domains": [ 00:18:06.044 { 00:18:06.044 "dma_device_id": "system", 00:18:06.044 "dma_device_type": 1 00:18:06.044 }, 00:18:06.044 { 00:18:06.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.044 "dma_device_type": 2 00:18:06.044 } 00:18:06.044 ], 00:18:06.044 "driver_specific": {} 00:18:06.044 }' 00:18:06.044 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.044 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.044 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.044 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.303 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.303 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.303 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.303 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:06.303 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.563 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.563 "name": "BaseBdev2", 00:18:06.563 "aliases": [ 00:18:06.563 "fefcbdc4-d455-40aa-9992-0ab7415057fc" 00:18:06.563 ], 00:18:06.563 "product_name": "Malloc disk", 00:18:06.563 "block_size": 512, 00:18:06.563 "num_blocks": 65536, 00:18:06.563 "uuid": "fefcbdc4-d455-40aa-9992-0ab7415057fc", 00:18:06.563 "assigned_rate_limits": { 00:18:06.563 "rw_ios_per_sec": 0, 00:18:06.563 "rw_mbytes_per_sec": 0, 00:18:06.563 "r_mbytes_per_sec": 0, 00:18:06.563 "w_mbytes_per_sec": 0 00:18:06.563 }, 00:18:06.563 "claimed": true, 00:18:06.563 "claim_type": "exclusive_write", 00:18:06.563 "zoned": false, 00:18:06.563 "supported_io_types": { 00:18:06.563 "read": true, 00:18:06.563 "write": true, 00:18:06.563 "unmap": true, 00:18:06.563 "write_zeroes": true, 00:18:06.563 "flush": 
true, 00:18:06.563 "reset": true, 00:18:06.563 "compare": false, 00:18:06.563 "compare_and_write": false, 00:18:06.563 "abort": true, 00:18:06.563 "nvme_admin": false, 00:18:06.563 "nvme_io": false 00:18:06.563 }, 00:18:06.563 "memory_domains": [ 00:18:06.563 { 00:18:06.563 "dma_device_id": "system", 00:18:06.563 "dma_device_type": 1 00:18:06.563 }, 00:18:06.563 { 00:18:06.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.563 "dma_device_type": 2 00:18:06.563 } 00:18:06.563 ], 00:18:06.563 "driver_specific": {} 00:18:06.563 }' 00:18:06.563 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.563 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.822 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.081 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.081 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.081 19:02:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:07.081 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.081 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.081 "name": "BaseBdev3", 00:18:07.081 "aliases": [ 00:18:07.081 "6a5b488b-bddf-4263-b6ab-c1dfa9c42979" 00:18:07.081 ], 00:18:07.081 "product_name": "Malloc disk", 00:18:07.081 "block_size": 512, 00:18:07.081 "num_blocks": 65536, 00:18:07.081 "uuid": "6a5b488b-bddf-4263-b6ab-c1dfa9c42979", 00:18:07.081 "assigned_rate_limits": { 00:18:07.081 "rw_ios_per_sec": 0, 00:18:07.081 "rw_mbytes_per_sec": 0, 00:18:07.081 "r_mbytes_per_sec": 0, 00:18:07.081 "w_mbytes_per_sec": 0 00:18:07.081 }, 00:18:07.081 "claimed": true, 00:18:07.081 "claim_type": "exclusive_write", 00:18:07.081 "zoned": false, 00:18:07.081 "supported_io_types": { 00:18:07.081 "read": true, 00:18:07.081 "write": true, 00:18:07.081 "unmap": true, 00:18:07.081 "write_zeroes": true, 00:18:07.081 "flush": true, 00:18:07.081 "reset": true, 00:18:07.081 "compare": false, 00:18:07.081 "compare_and_write": false, 00:18:07.081 "abort": true, 00:18:07.081 "nvme_admin": false, 00:18:07.081 "nvme_io": false 00:18:07.081 }, 00:18:07.081 "memory_domains": [ 00:18:07.081 { 00:18:07.081 "dma_device_id": "system", 00:18:07.081 "dma_device_type": 1 00:18:07.081 }, 00:18:07.081 { 00:18:07.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.081 "dma_device_type": 2 00:18:07.081 } 00:18:07.081 ], 00:18:07.081 "driver_specific": {} 00:18:07.081 }' 00:18:07.081 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.340 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.340 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.340 
19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.340 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.340 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.340 19:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.340 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.340 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.340 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.598 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.598 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.598 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:07.598 [2024-06-10 19:02:22.351419] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.856 "name": "Existed_Raid", 00:18:07.856 "uuid": "f1031e58-a126-4bd1-af4b-4d09eb7f51ea", 00:18:07.856 "strip_size_kb": 0, 00:18:07.856 "state": "online", 00:18:07.856 "raid_level": "raid1", 00:18:07.856 "superblock": true, 00:18:07.856 "num_base_bdevs": 3, 00:18:07.856 "num_base_bdevs_discovered": 2, 00:18:07.856 "num_base_bdevs_operational": 2, 00:18:07.856 "base_bdevs_list": [ 00:18:07.856 { 00:18:07.856 "name": null, 00:18:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.856 "is_configured": false, 00:18:07.856 "data_offset": 2048, 00:18:07.856 
"data_size": 63488 00:18:07.856 }, 00:18:07.856 { 00:18:07.856 "name": "BaseBdev2", 00:18:07.856 "uuid": "fefcbdc4-d455-40aa-9992-0ab7415057fc", 00:18:07.856 "is_configured": true, 00:18:07.856 "data_offset": 2048, 00:18:07.856 "data_size": 63488 00:18:07.856 }, 00:18:07.856 { 00:18:07.856 "name": "BaseBdev3", 00:18:07.856 "uuid": "6a5b488b-bddf-4263-b6ab-c1dfa9c42979", 00:18:07.856 "is_configured": true, 00:18:07.856 "data_offset": 2048, 00:18:07.856 "data_size": 63488 00:18:07.856 } 00:18:07.856 ] 00:18:07.856 }' 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.856 19:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.423 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:08.423 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:08.423 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.423 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:08.682 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:08.682 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:08.682 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:08.941 [2024-06-10 19:02:23.595770] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.941 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:08.941 19:02:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:08.941 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.941 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:09.200 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:09.200 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:09.200 19:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:09.460 [2024-06-10 19:02:24.051112] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:09.460 [2024-06-10 19:02:24.051181] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.460 [2024-06-10 19:02:24.061515] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.460 [2024-06-10 19:02:24.061543] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.460 [2024-06-10 19:02:24.061554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x250f6f0 name Existed_Raid, state offline 00:18:09.460 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:09.460 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:09.460 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.460 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r 
'.[0]["name"] | select(.)' 00:18:09.720 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:09.720 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:09.720 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:09.720 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:09.720 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:09.720 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:09.979 BaseBdev2 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:09.979 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.238 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.238 [ 00:18:10.238 { 00:18:10.238 "name": "BaseBdev2", 00:18:10.239 
"aliases": [ 00:18:10.239 "32c979e2-c2b4-4f1e-886f-565b4eef8f42" 00:18:10.239 ], 00:18:10.239 "product_name": "Malloc disk", 00:18:10.239 "block_size": 512, 00:18:10.239 "num_blocks": 65536, 00:18:10.239 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:10.239 "assigned_rate_limits": { 00:18:10.239 "rw_ios_per_sec": 0, 00:18:10.239 "rw_mbytes_per_sec": 0, 00:18:10.239 "r_mbytes_per_sec": 0, 00:18:10.239 "w_mbytes_per_sec": 0 00:18:10.239 }, 00:18:10.239 "claimed": false, 00:18:10.239 "zoned": false, 00:18:10.239 "supported_io_types": { 00:18:10.239 "read": true, 00:18:10.239 "write": true, 00:18:10.239 "unmap": true, 00:18:10.239 "write_zeroes": true, 00:18:10.239 "flush": true, 00:18:10.239 "reset": true, 00:18:10.239 "compare": false, 00:18:10.239 "compare_and_write": false, 00:18:10.239 "abort": true, 00:18:10.239 "nvme_admin": false, 00:18:10.239 "nvme_io": false 00:18:10.239 }, 00:18:10.239 "memory_domains": [ 00:18:10.239 { 00:18:10.239 "dma_device_id": "system", 00:18:10.239 "dma_device_type": 1 00:18:10.239 }, 00:18:10.239 { 00:18:10.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.239 "dma_device_type": 2 00:18:10.239 } 00:18:10.239 ], 00:18:10.239 "driver_specific": {} 00:18:10.239 } 00:18:10.239 ] 00:18:10.239 19:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:10.239 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:10.239 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:10.239 19:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:10.498 BaseBdev3 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:10.498 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.758 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:11.018 [ 00:18:11.018 { 00:18:11.018 "name": "BaseBdev3", 00:18:11.018 "aliases": [ 00:18:11.018 "edd23cb4-1027-4cea-83ed-5e9f01e962e4" 00:18:11.018 ], 00:18:11.018 "product_name": "Malloc disk", 00:18:11.018 "block_size": 512, 00:18:11.018 "num_blocks": 65536, 00:18:11.018 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:11.018 "assigned_rate_limits": { 00:18:11.018 "rw_ios_per_sec": 0, 00:18:11.018 "rw_mbytes_per_sec": 0, 00:18:11.018 "r_mbytes_per_sec": 0, 00:18:11.018 "w_mbytes_per_sec": 0 00:18:11.018 }, 00:18:11.018 "claimed": false, 00:18:11.018 "zoned": false, 00:18:11.018 "supported_io_types": { 00:18:11.018 "read": true, 00:18:11.018 "write": true, 00:18:11.018 "unmap": true, 00:18:11.018 "write_zeroes": true, 00:18:11.018 "flush": true, 00:18:11.018 "reset": true, 00:18:11.018 "compare": false, 00:18:11.018 "compare_and_write": false, 00:18:11.018 "abort": true, 00:18:11.018 "nvme_admin": false, 00:18:11.018 "nvme_io": false 00:18:11.018 }, 00:18:11.018 "memory_domains": [ 00:18:11.018 { 00:18:11.018 "dma_device_id": 
"system", 00:18:11.018 "dma_device_type": 1 00:18:11.018 }, 00:18:11.018 { 00:18:11.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.018 "dma_device_type": 2 00:18:11.018 } 00:18:11.018 ], 00:18:11.018 "driver_specific": {} 00:18:11.018 } 00:18:11.018 ] 00:18:11.018 19:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:11.018 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:11.018 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:11.018 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:11.277 [2024-06-10 19:02:25.842431] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.277 [2024-06-10 19:02:25.842466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.277 [2024-06-10 19:02:25.842484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.277 [2024-06-10 19:02:25.843730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.277 19:02:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.277 19:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.536 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.536 "name": "Existed_Raid", 00:18:11.536 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:11.536 "strip_size_kb": 0, 00:18:11.536 "state": "configuring", 00:18:11.536 "raid_level": "raid1", 00:18:11.536 "superblock": true, 00:18:11.536 "num_base_bdevs": 3, 00:18:11.536 "num_base_bdevs_discovered": 2, 00:18:11.536 "num_base_bdevs_operational": 3, 00:18:11.536 "base_bdevs_list": [ 00:18:11.536 { 00:18:11.536 "name": "BaseBdev1", 00:18:11.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.536 "is_configured": false, 00:18:11.536 "data_offset": 0, 00:18:11.536 "data_size": 0 00:18:11.536 }, 00:18:11.536 { 00:18:11.536 "name": "BaseBdev2", 00:18:11.536 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:11.536 "is_configured": true, 00:18:11.536 "data_offset": 2048, 00:18:11.536 "data_size": 63488 00:18:11.536 }, 00:18:11.536 { 00:18:11.536 "name": "BaseBdev3", 00:18:11.536 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:11.536 "is_configured": true, 00:18:11.536 
"data_offset": 2048, 00:18:11.536 "data_size": 63488 00:18:11.536 } 00:18:11.536 ] 00:18:11.536 }' 00:18:11.536 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.536 19:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.101 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:12.362 [2024-06-10 19:02:26.865158] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:12.362 19:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.649 19:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.649 "name": "Existed_Raid", 00:18:12.649 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:12.649 "strip_size_kb": 0, 00:18:12.649 "state": "configuring", 00:18:12.649 "raid_level": "raid1", 00:18:12.649 "superblock": true, 00:18:12.649 "num_base_bdevs": 3, 00:18:12.649 "num_base_bdevs_discovered": 1, 00:18:12.649 "num_base_bdevs_operational": 3, 00:18:12.649 "base_bdevs_list": [ 00:18:12.649 { 00:18:12.649 "name": "BaseBdev1", 00:18:12.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.649 "is_configured": false, 00:18:12.649 "data_offset": 0, 00:18:12.649 "data_size": 0 00:18:12.649 }, 00:18:12.649 { 00:18:12.649 "name": null, 00:18:12.649 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:12.649 "is_configured": false, 00:18:12.649 "data_offset": 2048, 00:18:12.649 "data_size": 63488 00:18:12.649 }, 00:18:12.649 { 00:18:12.649 "name": "BaseBdev3", 00:18:12.649 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:12.649 "is_configured": true, 00:18:12.649 "data_offset": 2048, 00:18:12.649 "data_size": 63488 00:18:12.649 } 00:18:12.649 ] 00:18:12.649 }' 00:18:12.649 19:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.649 19:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.908 19:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:12.908 19:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.167 19:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- 
# [[ false == \f\a\l\s\e ]] 00:18:13.167 19:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:13.427 [2024-06-10 19:02:28.087632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.427 BaseBdev1 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:13.427 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.686 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.945 [ 00:18:13.945 { 00:18:13.945 "name": "BaseBdev1", 00:18:13.945 "aliases": [ 00:18:13.945 "0604fdd9-9fec-483e-ae2f-bb693233ea93" 00:18:13.945 ], 00:18:13.945 "product_name": "Malloc disk", 00:18:13.945 "block_size": 512, 00:18:13.945 "num_blocks": 65536, 00:18:13.945 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:13.945 "assigned_rate_limits": { 00:18:13.945 "rw_ios_per_sec": 0, 00:18:13.945 "rw_mbytes_per_sec": 0, 00:18:13.945 "r_mbytes_per_sec": 0, 
00:18:13.945 "w_mbytes_per_sec": 0 00:18:13.945 }, 00:18:13.945 "claimed": true, 00:18:13.945 "claim_type": "exclusive_write", 00:18:13.945 "zoned": false, 00:18:13.945 "supported_io_types": { 00:18:13.945 "read": true, 00:18:13.945 "write": true, 00:18:13.945 "unmap": true, 00:18:13.945 "write_zeroes": true, 00:18:13.945 "flush": true, 00:18:13.945 "reset": true, 00:18:13.945 "compare": false, 00:18:13.945 "compare_and_write": false, 00:18:13.945 "abort": true, 00:18:13.945 "nvme_admin": false, 00:18:13.945 "nvme_io": false 00:18:13.945 }, 00:18:13.945 "memory_domains": [ 00:18:13.945 { 00:18:13.945 "dma_device_id": "system", 00:18:13.945 "dma_device_type": 1 00:18:13.945 }, 00:18:13.945 { 00:18:13.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.945 "dma_device_type": 2 00:18:13.945 } 00:18:13.945 ], 00:18:13.945 "driver_specific": {} 00:18:13.945 } 00:18:13.945 ] 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.945 19:02:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.945 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.203 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.203 "name": "Existed_Raid", 00:18:14.203 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:14.203 "strip_size_kb": 0, 00:18:14.203 "state": "configuring", 00:18:14.203 "raid_level": "raid1", 00:18:14.203 "superblock": true, 00:18:14.203 "num_base_bdevs": 3, 00:18:14.203 "num_base_bdevs_discovered": 2, 00:18:14.203 "num_base_bdevs_operational": 3, 00:18:14.203 "base_bdevs_list": [ 00:18:14.203 { 00:18:14.203 "name": "BaseBdev1", 00:18:14.203 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:14.203 "is_configured": true, 00:18:14.203 "data_offset": 2048, 00:18:14.203 "data_size": 63488 00:18:14.203 }, 00:18:14.203 { 00:18:14.203 "name": null, 00:18:14.203 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:14.203 "is_configured": false, 00:18:14.203 "data_offset": 2048, 00:18:14.203 "data_size": 63488 00:18:14.203 }, 00:18:14.203 { 00:18:14.203 "name": "BaseBdev3", 00:18:14.203 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:14.203 "is_configured": true, 00:18:14.203 "data_offset": 2048, 00:18:14.203 "data_size": 63488 00:18:14.203 } 00:18:14.203 ] 00:18:14.203 }' 00:18:14.203 19:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.204 19:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.770 19:02:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.770 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:15.028 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:15.028 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:15.288 [2024-06-10 19:02:29.800208] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.288 19:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.288 19:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.288 "name": "Existed_Raid", 00:18:15.288 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:15.288 "strip_size_kb": 0, 00:18:15.288 "state": "configuring", 00:18:15.288 "raid_level": "raid1", 00:18:15.288 "superblock": true, 00:18:15.288 "num_base_bdevs": 3, 00:18:15.288 "num_base_bdevs_discovered": 1, 00:18:15.288 "num_base_bdevs_operational": 3, 00:18:15.288 "base_bdevs_list": [ 00:18:15.288 { 00:18:15.288 "name": "BaseBdev1", 00:18:15.288 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:15.288 "is_configured": true, 00:18:15.288 "data_offset": 2048, 00:18:15.288 "data_size": 63488 00:18:15.288 }, 00:18:15.288 { 00:18:15.288 "name": null, 00:18:15.288 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:15.288 "is_configured": false, 00:18:15.288 "data_offset": 2048, 00:18:15.288 "data_size": 63488 00:18:15.288 }, 00:18:15.288 { 00:18:15.288 "name": null, 00:18:15.288 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:15.288 "is_configured": false, 00:18:15.288 "data_offset": 2048, 00:18:15.288 "data_size": 63488 00:18:15.288 } 00:18:15.288 ] 00:18:15.288 }' 00:18:15.288 19:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.288 19:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.223 19:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.223 19:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:18:16.223 19:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:16.223 19:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:16.482 [2024-06-10 19:02:31.055532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.482 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:18:16.741 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.741 "name": "Existed_Raid", 00:18:16.741 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:16.741 "strip_size_kb": 0, 00:18:16.741 "state": "configuring", 00:18:16.741 "raid_level": "raid1", 00:18:16.741 "superblock": true, 00:18:16.741 "num_base_bdevs": 3, 00:18:16.741 "num_base_bdevs_discovered": 2, 00:18:16.741 "num_base_bdevs_operational": 3, 00:18:16.741 "base_bdevs_list": [ 00:18:16.741 { 00:18:16.741 "name": "BaseBdev1", 00:18:16.741 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:16.741 "is_configured": true, 00:18:16.741 "data_offset": 2048, 00:18:16.741 "data_size": 63488 00:18:16.741 }, 00:18:16.741 { 00:18:16.741 "name": null, 00:18:16.741 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:16.741 "is_configured": false, 00:18:16.741 "data_offset": 2048, 00:18:16.741 "data_size": 63488 00:18:16.741 }, 00:18:16.741 { 00:18:16.741 "name": "BaseBdev3", 00:18:16.741 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:16.741 "is_configured": true, 00:18:16.741 "data_offset": 2048, 00:18:16.741 "data_size": 63488 00:18:16.741 } 00:18:16.741 ] 00:18:16.741 }' 00:18:16.741 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.741 19:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.307 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.307 19:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:17.566 [2024-06-10 19:02:32.274783] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.566 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.825 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.825 "name": "Existed_Raid", 00:18:17.825 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:17.825 "strip_size_kb": 0, 
00:18:17.825 "state": "configuring", 00:18:17.825 "raid_level": "raid1", 00:18:17.825 "superblock": true, 00:18:17.825 "num_base_bdevs": 3, 00:18:17.825 "num_base_bdevs_discovered": 1, 00:18:17.825 "num_base_bdevs_operational": 3, 00:18:17.825 "base_bdevs_list": [ 00:18:17.825 { 00:18:17.825 "name": null, 00:18:17.825 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:17.825 "is_configured": false, 00:18:17.825 "data_offset": 2048, 00:18:17.825 "data_size": 63488 00:18:17.825 }, 00:18:17.825 { 00:18:17.825 "name": null, 00:18:17.825 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:17.825 "is_configured": false, 00:18:17.825 "data_offset": 2048, 00:18:17.825 "data_size": 63488 00:18:17.825 }, 00:18:17.825 { 00:18:17.825 "name": "BaseBdev3", 00:18:17.825 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:17.825 "is_configured": true, 00:18:17.825 "data_offset": 2048, 00:18:17.825 "data_size": 63488 00:18:17.825 } 00:18:17.825 ] 00:18:17.825 }' 00:18:17.825 19:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.825 19:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.394 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.394 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:18.653 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:18.653 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:18.912 [2024-06-10 19:02:33.467946] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.912 
19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.912 "name": "Existed_Raid", 00:18:18.912 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:18.912 "strip_size_kb": 0, 00:18:18.912 "state": "configuring", 00:18:18.912 "raid_level": "raid1", 00:18:18.912 "superblock": true, 00:18:18.912 "num_base_bdevs": 3, 00:18:18.912 "num_base_bdevs_discovered": 2, 00:18:18.912 "num_base_bdevs_operational": 3, 00:18:18.912 
"base_bdevs_list": [ 00:18:18.912 { 00:18:18.912 "name": null, 00:18:18.912 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:18.912 "is_configured": false, 00:18:18.912 "data_offset": 2048, 00:18:18.912 "data_size": 63488 00:18:18.912 }, 00:18:18.912 { 00:18:18.912 "name": "BaseBdev2", 00:18:18.912 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:18.912 "is_configured": true, 00:18:18.912 "data_offset": 2048, 00:18:18.912 "data_size": 63488 00:18:18.912 }, 00:18:18.912 { 00:18:18.912 "name": "BaseBdev3", 00:18:18.912 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:18.912 "is_configured": true, 00:18:18.912 "data_offset": 2048, 00:18:18.912 "data_size": 63488 00:18:18.912 } 00:18:18.912 ] 00:18:18.912 }' 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.912 19:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.481 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.481 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:19.740 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:19.740 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.741 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:20.000 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0604fdd9-9fec-483e-ae2f-bb693233ea93 00:18:20.259 
[2024-06-10 19:02:34.866939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:20.259 [2024-06-10 19:02:34.867070] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x250f340 00:18:20.259 [2024-06-10 19:02:34.867081] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.259 [2024-06-10 19:02:34.867238] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26c2b40 00:18:20.260 [2024-06-10 19:02:34.867342] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x250f340 00:18:20.260 [2024-06-10 19:02:34.867351] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x250f340 00:18:20.260 [2024-06-10 19:02:34.867434] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.260 NewBaseBdev 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:20.260 19:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.519 19:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:18:20.779 [ 00:18:20.779 { 00:18:20.779 "name": "NewBaseBdev", 00:18:20.779 "aliases": [ 00:18:20.779 "0604fdd9-9fec-483e-ae2f-bb693233ea93" 00:18:20.779 ], 00:18:20.779 "product_name": "Malloc disk", 00:18:20.779 "block_size": 512, 00:18:20.779 "num_blocks": 65536, 00:18:20.779 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:20.779 "assigned_rate_limits": { 00:18:20.779 "rw_ios_per_sec": 0, 00:18:20.779 "rw_mbytes_per_sec": 0, 00:18:20.779 "r_mbytes_per_sec": 0, 00:18:20.779 "w_mbytes_per_sec": 0 00:18:20.779 }, 00:18:20.779 "claimed": true, 00:18:20.779 "claim_type": "exclusive_write", 00:18:20.779 "zoned": false, 00:18:20.779 "supported_io_types": { 00:18:20.779 "read": true, 00:18:20.779 "write": true, 00:18:20.779 "unmap": true, 00:18:20.779 "write_zeroes": true, 00:18:20.779 "flush": true, 00:18:20.779 "reset": true, 00:18:20.779 "compare": false, 00:18:20.779 "compare_and_write": false, 00:18:20.779 "abort": true, 00:18:20.779 "nvme_admin": false, 00:18:20.779 "nvme_io": false 00:18:20.779 }, 00:18:20.779 "memory_domains": [ 00:18:20.779 { 00:18:20.779 "dma_device_id": "system", 00:18:20.779 "dma_device_type": 1 00:18:20.779 }, 00:18:20.779 { 00:18:20.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.779 "dma_device_type": 2 00:18:20.779 } 00:18:20.779 ], 00:18:20.779 "driver_specific": {} 00:18:20.779 } 00:18:20.779 ] 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.779 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.779 "name": "Existed_Raid", 00:18:20.779 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:20.779 "strip_size_kb": 0, 00:18:20.779 "state": "online", 00:18:20.779 "raid_level": "raid1", 00:18:20.779 "superblock": true, 00:18:20.779 "num_base_bdevs": 3, 00:18:20.779 "num_base_bdevs_discovered": 3, 00:18:20.779 "num_base_bdevs_operational": 3, 00:18:20.779 "base_bdevs_list": [ 00:18:20.779 { 00:18:20.779 "name": "NewBaseBdev", 00:18:20.779 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:20.779 "is_configured": true, 00:18:20.779 "data_offset": 2048, 00:18:20.779 "data_size": 63488 00:18:20.779 }, 00:18:20.779 { 00:18:20.779 "name": "BaseBdev2", 00:18:20.779 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:20.779 "is_configured": true, 00:18:20.779 "data_offset": 2048, 00:18:20.779 "data_size": 63488 00:18:20.779 }, 00:18:20.779 { 
00:18:20.779 "name": "BaseBdev3", 00:18:20.779 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:20.779 "is_configured": true, 00:18:20.779 "data_offset": 2048, 00:18:20.779 "data_size": 63488 00:18:20.779 } 00:18:20.779 ] 00:18:20.779 }' 00:18:21.038 19:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.038 19:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:21.607 [2024-06-10 19:02:36.310998] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.607 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:21.607 "name": "Existed_Raid", 00:18:21.607 "aliases": [ 00:18:21.607 "96b26ff4-db99-4a85-86cd-e025e2658da8" 00:18:21.607 ], 00:18:21.607 "product_name": "Raid Volume", 00:18:21.607 "block_size": 512, 00:18:21.607 "num_blocks": 63488, 00:18:21.607 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:21.607 
"assigned_rate_limits": { 00:18:21.607 "rw_ios_per_sec": 0, 00:18:21.607 "rw_mbytes_per_sec": 0, 00:18:21.607 "r_mbytes_per_sec": 0, 00:18:21.607 "w_mbytes_per_sec": 0 00:18:21.607 }, 00:18:21.607 "claimed": false, 00:18:21.607 "zoned": false, 00:18:21.607 "supported_io_types": { 00:18:21.607 "read": true, 00:18:21.607 "write": true, 00:18:21.607 "unmap": false, 00:18:21.607 "write_zeroes": true, 00:18:21.607 "flush": false, 00:18:21.607 "reset": true, 00:18:21.607 "compare": false, 00:18:21.607 "compare_and_write": false, 00:18:21.607 "abort": false, 00:18:21.607 "nvme_admin": false, 00:18:21.607 "nvme_io": false 00:18:21.607 }, 00:18:21.607 "memory_domains": [ 00:18:21.607 { 00:18:21.607 "dma_device_id": "system", 00:18:21.607 "dma_device_type": 1 00:18:21.607 }, 00:18:21.607 { 00:18:21.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.607 "dma_device_type": 2 00:18:21.607 }, 00:18:21.607 { 00:18:21.607 "dma_device_id": "system", 00:18:21.607 "dma_device_type": 1 00:18:21.607 }, 00:18:21.607 { 00:18:21.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.607 "dma_device_type": 2 00:18:21.607 }, 00:18:21.607 { 00:18:21.607 "dma_device_id": "system", 00:18:21.607 "dma_device_type": 1 00:18:21.607 }, 00:18:21.607 { 00:18:21.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.607 "dma_device_type": 2 00:18:21.607 } 00:18:21.607 ], 00:18:21.607 "driver_specific": { 00:18:21.607 "raid": { 00:18:21.607 "uuid": "96b26ff4-db99-4a85-86cd-e025e2658da8", 00:18:21.607 "strip_size_kb": 0, 00:18:21.607 "state": "online", 00:18:21.607 "raid_level": "raid1", 00:18:21.607 "superblock": true, 00:18:21.607 "num_base_bdevs": 3, 00:18:21.607 "num_base_bdevs_discovered": 3, 00:18:21.607 "num_base_bdevs_operational": 3, 00:18:21.607 "base_bdevs_list": [ 00:18:21.607 { 00:18:21.607 "name": "NewBaseBdev", 00:18:21.607 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:21.607 "is_configured": true, 00:18:21.607 "data_offset": 2048, 00:18:21.607 "data_size": 63488 00:18:21.607 
}, 00:18:21.607 { 00:18:21.607 "name": "BaseBdev2", 00:18:21.607 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:21.607 "is_configured": true, 00:18:21.607 "data_offset": 2048, 00:18:21.607 "data_size": 63488 00:18:21.607 }, 00:18:21.607 { 00:18:21.607 "name": "BaseBdev3", 00:18:21.607 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:21.607 "is_configured": true, 00:18:21.607 "data_offset": 2048, 00:18:21.607 "data_size": 63488 00:18:21.607 } 00:18:21.607 ] 00:18:21.607 } 00:18:21.607 } 00:18:21.607 }' 00:18:21.608 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.867 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:21.867 BaseBdev2 00:18:21.867 BaseBdev3' 00:18:21.867 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:21.867 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:21.867 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:21.867 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:21.867 "name": "NewBaseBdev", 00:18:21.867 "aliases": [ 00:18:21.867 "0604fdd9-9fec-483e-ae2f-bb693233ea93" 00:18:21.867 ], 00:18:21.867 "product_name": "Malloc disk", 00:18:21.867 "block_size": 512, 00:18:21.867 "num_blocks": 65536, 00:18:21.867 "uuid": "0604fdd9-9fec-483e-ae2f-bb693233ea93", 00:18:21.867 "assigned_rate_limits": { 00:18:21.867 "rw_ios_per_sec": 0, 00:18:21.867 "rw_mbytes_per_sec": 0, 00:18:21.867 "r_mbytes_per_sec": 0, 00:18:21.867 "w_mbytes_per_sec": 0 00:18:21.867 }, 00:18:21.867 "claimed": true, 00:18:21.867 "claim_type": "exclusive_write", 00:18:21.867 "zoned": false, 
00:18:21.867 "supported_io_types": { 00:18:21.867 "read": true, 00:18:21.867 "write": true, 00:18:21.867 "unmap": true, 00:18:21.867 "write_zeroes": true, 00:18:21.867 "flush": true, 00:18:21.867 "reset": true, 00:18:21.867 "compare": false, 00:18:21.867 "compare_and_write": false, 00:18:21.867 "abort": true, 00:18:21.867 "nvme_admin": false, 00:18:21.867 "nvme_io": false 00:18:21.867 }, 00:18:21.867 "memory_domains": [ 00:18:21.867 { 00:18:21.867 "dma_device_id": "system", 00:18:21.867 "dma_device_type": 1 00:18:21.867 }, 00:18:21.867 { 00:18:21.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.867 "dma_device_type": 2 00:18:21.867 } 00:18:21.867 ], 00:18:21.867 "driver_specific": {} 00:18:21.867 }' 00:18:21.867 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.126 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.386 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.386 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.386 
19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.386 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:22.386 19:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:22.645 "name": "BaseBdev2", 00:18:22.645 "aliases": [ 00:18:22.645 "32c979e2-c2b4-4f1e-886f-565b4eef8f42" 00:18:22.645 ], 00:18:22.645 "product_name": "Malloc disk", 00:18:22.645 "block_size": 512, 00:18:22.645 "num_blocks": 65536, 00:18:22.645 "uuid": "32c979e2-c2b4-4f1e-886f-565b4eef8f42", 00:18:22.645 "assigned_rate_limits": { 00:18:22.645 "rw_ios_per_sec": 0, 00:18:22.645 "rw_mbytes_per_sec": 0, 00:18:22.645 "r_mbytes_per_sec": 0, 00:18:22.645 "w_mbytes_per_sec": 0 00:18:22.645 }, 00:18:22.645 "claimed": true, 00:18:22.645 "claim_type": "exclusive_write", 00:18:22.645 "zoned": false, 00:18:22.645 "supported_io_types": { 00:18:22.645 "read": true, 00:18:22.645 "write": true, 00:18:22.645 "unmap": true, 00:18:22.645 "write_zeroes": true, 00:18:22.645 "flush": true, 00:18:22.645 "reset": true, 00:18:22.645 "compare": false, 00:18:22.645 "compare_and_write": false, 00:18:22.645 "abort": true, 00:18:22.645 "nvme_admin": false, 00:18:22.645 "nvme_io": false 00:18:22.645 }, 00:18:22.645 "memory_domains": [ 00:18:22.645 { 00:18:22.645 "dma_device_id": "system", 00:18:22.645 "dma_device_type": 1 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.645 "dma_device_type": 2 00:18:22.645 } 00:18:22.645 ], 00:18:22.645 "driver_specific": {} 00:18:22.645 }' 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.645 19:02:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.645 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:22.904 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:23.163 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:23.163 "name": "BaseBdev3", 00:18:23.163 "aliases": [ 00:18:23.163 "edd23cb4-1027-4cea-83ed-5e9f01e962e4" 00:18:23.163 ], 00:18:23.163 "product_name": "Malloc disk", 00:18:23.163 "block_size": 512, 00:18:23.163 "num_blocks": 65536, 00:18:23.163 "uuid": "edd23cb4-1027-4cea-83ed-5e9f01e962e4", 00:18:23.163 
"assigned_rate_limits": { 00:18:23.163 "rw_ios_per_sec": 0, 00:18:23.163 "rw_mbytes_per_sec": 0, 00:18:23.163 "r_mbytes_per_sec": 0, 00:18:23.163 "w_mbytes_per_sec": 0 00:18:23.163 }, 00:18:23.163 "claimed": true, 00:18:23.163 "claim_type": "exclusive_write", 00:18:23.163 "zoned": false, 00:18:23.163 "supported_io_types": { 00:18:23.163 "read": true, 00:18:23.163 "write": true, 00:18:23.163 "unmap": true, 00:18:23.163 "write_zeroes": true, 00:18:23.163 "flush": true, 00:18:23.163 "reset": true, 00:18:23.163 "compare": false, 00:18:23.163 "compare_and_write": false, 00:18:23.163 "abort": true, 00:18:23.163 "nvme_admin": false, 00:18:23.163 "nvme_io": false 00:18:23.163 }, 00:18:23.163 "memory_domains": [ 00:18:23.163 { 00:18:23.163 "dma_device_id": "system", 00:18:23.163 "dma_device_type": 1 00:18:23.163 }, 00:18:23.163 { 00:18:23.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.163 "dma_device_type": 2 00:18:23.163 } 00:18:23.163 ], 00:18:23.163 "driver_specific": {} 00:18:23.163 }' 00:18:23.163 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.163 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.163 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:23.163 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.163 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.422 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:23.422 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.422 19:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.422 19:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:23.422 19:02:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.422 19:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.422 19:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:23.422 19:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.682 [2024-06-10 19:02:38.299995] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.682 [2024-06-10 19:02:38.300019] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.682 [2024-06-10 19:02:38.300064] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.682 [2024-06-10 19:02:38.300299] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.682 [2024-06-10 19:02:38.300310] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x250f340 name Existed_Raid, state offline 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1679131 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1679131 ']' 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1679131 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1679131 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1679131' 00:18:23.682 killing process with pid 1679131 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1679131 00:18:23.682 [2024-06-10 19:02:38.376915] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.682 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1679131 00:18:23.682 [2024-06-10 19:02:38.400695] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:23.942 19:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:23.942 00:18:23.942 real 0m26.587s 00:18:23.942 user 0m48.746s 00:18:23.942 sys 0m4.861s 00:18:23.942 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:23.942 19:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.942 ************************************ 00:18:23.942 END TEST raid_state_function_test_sb 00:18:23.942 ************************************ 00:18:23.942 19:02:38 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:23.942 19:02:38 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:18:23.942 19:02:38 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:23.942 19:02:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.942 ************************************ 00:18:23.942 START TEST raid_superblock_test 00:18:23.942 ************************************ 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 3 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1684264 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1684264 /var/tmp/spdk-raid.sock 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@830 -- # '[' -z 1684264 ']' 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:23.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:23.942 19:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.202 [2024-06-10 19:02:38.736006] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:18:24.202 [2024-06-10 19:02:38.736062] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684264 ] 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.0 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.1 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.2 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.3 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.4 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested 
device 0000:b6:01.5 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.6 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:01.7 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.0 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.1 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.2 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.3 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.4 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.5 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.6 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b6:02.7 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.0 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.1 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.2 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.3 
cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.4 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.5 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.6 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:01.7 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.0 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.1 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.2 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.3 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.4 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.5 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.6 cannot be used 00:18:24.202 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:24.202 EAL: Requested device 0000:b8:02.7 cannot be used 00:18:24.203 [2024-06-10 19:02:38.867782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.203 [2024-06-10 19:02:38.954038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.462 [2024-06-10 19:02:39.013950] bdev_raid.c:1416:raid_bdev_get_ctx_size: 
*DEBUG*: raid_bdev_get_ctx_size 00:18:24.462 [2024-06-10 19:02:39.014002] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:25.031 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:25.290 malloc1 00:18:25.290 19:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:25.290 [2024-06-10 19:02:40.029886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.290 [2024-06-10 19:02:40.029934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.290 [2024-06-10 
19:02:40.029953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d31b70 00:18:25.291 [2024-06-10 19:02:40.029965] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.291 [2024-06-10 19:02:40.031435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.291 [2024-06-10 19:02:40.031464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:25.291 pt1 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:25.550 malloc2 00:18:25.550 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.810 [2024-06-10 19:02:40.495634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.810 [2024-06-10 
19:02:40.495675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.810 [2024-06-10 19:02:40.495690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d32f70 00:18:25.810 [2024-06-10 19:02:40.495702] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.810 [2024-06-10 19:02:40.497057] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.810 [2024-06-10 19:02:40.497082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.810 pt2 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:25.810 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:26.070 malloc3 00:18:26.070 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:26.329 [2024-06-10 
19:02:40.957161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:26.329 [2024-06-10 19:02:40.957203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.329 [2024-06-10 19:02:40.957218] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec9940 00:18:26.329 [2024-06-10 19:02:40.957230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.329 [2024-06-10 19:02:40.958518] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.329 [2024-06-10 19:02:40.958545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:26.329 pt3 00:18:26.329 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:26.329 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:26.329 19:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:26.589 [2024-06-10 19:02:41.189783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:26.589 [2024-06-10 19:02:41.190893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.589 [2024-06-10 19:02:41.190942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:26.589 [2024-06-10 19:02:41.191088] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d2a210 00:18:26.589 [2024-06-10 19:02:41.191098] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:26.589 [2024-06-10 19:02:41.191265] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d31840 00:18:26.589 [2024-06-10 19:02:41.191397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d2a210 
00:18:26.589 [2024-06-10 19:02:41.191406] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d2a210 00:18:26.589 [2024-06-10 19:02:41.191491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.589 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.848 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.848 "name": "raid_bdev1", 00:18:26.848 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:26.848 "strip_size_kb": 0, 00:18:26.848 "state": "online", 00:18:26.848 "raid_level": "raid1", 
00:18:26.848 "superblock": true, 00:18:26.848 "num_base_bdevs": 3, 00:18:26.848 "num_base_bdevs_discovered": 3, 00:18:26.848 "num_base_bdevs_operational": 3, 00:18:26.848 "base_bdevs_list": [ 00:18:26.848 { 00:18:26.848 "name": "pt1", 00:18:26.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.848 "is_configured": true, 00:18:26.848 "data_offset": 2048, 00:18:26.848 "data_size": 63488 00:18:26.848 }, 00:18:26.848 { 00:18:26.848 "name": "pt2", 00:18:26.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.848 "is_configured": true, 00:18:26.848 "data_offset": 2048, 00:18:26.848 "data_size": 63488 00:18:26.848 }, 00:18:26.848 { 00:18:26.848 "name": "pt3", 00:18:26.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:26.848 "is_configured": true, 00:18:26.848 "data_offset": 2048, 00:18:26.848 "data_size": 63488 00:18:26.848 } 00:18:26.848 ] 00:18:26.848 }' 00:18:26.848 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.848 19:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:27.417 19:02:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:27.417 [2024-06-10 19:02:42.160519] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.675 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:27.675 "name": "raid_bdev1", 00:18:27.675 "aliases": [ 00:18:27.675 "7f99a4e5-7cf2-4931-b25d-ab31b6867e76" 00:18:27.675 ], 00:18:27.675 "product_name": "Raid Volume", 00:18:27.675 "block_size": 512, 00:18:27.675 "num_blocks": 63488, 00:18:27.675 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:27.675 "assigned_rate_limits": { 00:18:27.675 "rw_ios_per_sec": 0, 00:18:27.675 "rw_mbytes_per_sec": 0, 00:18:27.675 "r_mbytes_per_sec": 0, 00:18:27.675 "w_mbytes_per_sec": 0 00:18:27.675 }, 00:18:27.675 "claimed": false, 00:18:27.675 "zoned": false, 00:18:27.675 "supported_io_types": { 00:18:27.675 "read": true, 00:18:27.675 "write": true, 00:18:27.675 "unmap": false, 00:18:27.675 "write_zeroes": true, 00:18:27.675 "flush": false, 00:18:27.675 "reset": true, 00:18:27.675 "compare": false, 00:18:27.675 "compare_and_write": false, 00:18:27.675 "abort": false, 00:18:27.675 "nvme_admin": false, 00:18:27.675 "nvme_io": false 00:18:27.675 }, 00:18:27.675 "memory_domains": [ 00:18:27.675 { 00:18:27.675 "dma_device_id": "system", 00:18:27.675 "dma_device_type": 1 00:18:27.675 }, 00:18:27.675 { 00:18:27.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.675 "dma_device_type": 2 00:18:27.675 }, 00:18:27.675 { 00:18:27.676 "dma_device_id": "system", 00:18:27.676 "dma_device_type": 1 00:18:27.676 }, 00:18:27.676 { 00:18:27.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.676 "dma_device_type": 2 00:18:27.676 }, 00:18:27.676 { 00:18:27.676 "dma_device_id": "system", 00:18:27.676 "dma_device_type": 1 00:18:27.676 }, 00:18:27.676 { 00:18:27.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.676 "dma_device_type": 2 00:18:27.676 } 00:18:27.676 ], 00:18:27.676 "driver_specific": { 00:18:27.676 "raid": { 
00:18:27.676 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:27.676 "strip_size_kb": 0, 00:18:27.676 "state": "online", 00:18:27.676 "raid_level": "raid1", 00:18:27.676 "superblock": true, 00:18:27.676 "num_base_bdevs": 3, 00:18:27.676 "num_base_bdevs_discovered": 3, 00:18:27.676 "num_base_bdevs_operational": 3, 00:18:27.676 "base_bdevs_list": [ 00:18:27.676 { 00:18:27.676 "name": "pt1", 00:18:27.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.676 "is_configured": true, 00:18:27.676 "data_offset": 2048, 00:18:27.676 "data_size": 63488 00:18:27.676 }, 00:18:27.676 { 00:18:27.676 "name": "pt2", 00:18:27.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.676 "is_configured": true, 00:18:27.676 "data_offset": 2048, 00:18:27.676 "data_size": 63488 00:18:27.676 }, 00:18:27.676 { 00:18:27.676 "name": "pt3", 00:18:27.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.676 "is_configured": true, 00:18:27.676 "data_offset": 2048, 00:18:27.676 "data_size": 63488 00:18:27.676 } 00:18:27.676 ] 00:18:27.676 } 00:18:27.676 } 00:18:27.676 }' 00:18:27.676 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.676 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:27.676 pt2 00:18:27.676 pt3' 00:18:27.676 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.676 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:27.676 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:27.934 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:27.934 "name": "pt1", 00:18:27.934 "aliases": [ 00:18:27.934 "00000000-0000-0000-0000-000000000001" 
00:18:27.934 ], 00:18:27.934 "product_name": "passthru", 00:18:27.934 "block_size": 512, 00:18:27.934 "num_blocks": 65536, 00:18:27.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.934 "assigned_rate_limits": { 00:18:27.934 "rw_ios_per_sec": 0, 00:18:27.934 "rw_mbytes_per_sec": 0, 00:18:27.934 "r_mbytes_per_sec": 0, 00:18:27.934 "w_mbytes_per_sec": 0 00:18:27.934 }, 00:18:27.934 "claimed": true, 00:18:27.934 "claim_type": "exclusive_write", 00:18:27.934 "zoned": false, 00:18:27.934 "supported_io_types": { 00:18:27.934 "read": true, 00:18:27.934 "write": true, 00:18:27.934 "unmap": true, 00:18:27.934 "write_zeroes": true, 00:18:27.934 "flush": true, 00:18:27.934 "reset": true, 00:18:27.934 "compare": false, 00:18:27.934 "compare_and_write": false, 00:18:27.934 "abort": true, 00:18:27.934 "nvme_admin": false, 00:18:27.934 "nvme_io": false 00:18:27.934 }, 00:18:27.934 "memory_domains": [ 00:18:27.934 { 00:18:27.934 "dma_device_id": "system", 00:18:27.934 "dma_device_type": 1 00:18:27.934 }, 00:18:27.934 { 00:18:27.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.934 "dma_device_type": 2 00:18:27.934 } 00:18:27.934 ], 00:18:27.934 "driver_specific": { 00:18:27.934 "passthru": { 00:18:27.934 "name": "pt1", 00:18:27.934 "base_bdev_name": "malloc1" 00:18:27.934 } 00:18:27.934 } 00:18:27.934 }' 00:18:27.934 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.934 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:27.934 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:27.934 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.934 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:27.935 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:27.935 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:18:27.935 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:28.194 19:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.453 "name": "pt2", 00:18:28.453 "aliases": [ 00:18:28.453 "00000000-0000-0000-0000-000000000002" 00:18:28.453 ], 00:18:28.453 "product_name": "passthru", 00:18:28.453 "block_size": 512, 00:18:28.453 "num_blocks": 65536, 00:18:28.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.453 "assigned_rate_limits": { 00:18:28.453 "rw_ios_per_sec": 0, 00:18:28.453 "rw_mbytes_per_sec": 0, 00:18:28.453 "r_mbytes_per_sec": 0, 00:18:28.453 "w_mbytes_per_sec": 0 00:18:28.453 }, 00:18:28.453 "claimed": true, 00:18:28.453 "claim_type": "exclusive_write", 00:18:28.453 "zoned": false, 00:18:28.453 "supported_io_types": { 00:18:28.453 "read": true, 00:18:28.453 "write": true, 00:18:28.453 "unmap": true, 00:18:28.453 "write_zeroes": true, 00:18:28.453 "flush": true, 00:18:28.453 "reset": true, 00:18:28.453 "compare": false, 00:18:28.453 "compare_and_write": false, 00:18:28.453 "abort": true, 00:18:28.453 "nvme_admin": false, 00:18:28.453 "nvme_io": false 00:18:28.453 }, 
00:18:28.453 "memory_domains": [ 00:18:28.453 { 00:18:28.453 "dma_device_id": "system", 00:18:28.453 "dma_device_type": 1 00:18:28.453 }, 00:18:28.453 { 00:18:28.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.453 "dma_device_type": 2 00:18:28.453 } 00:18:28.453 ], 00:18:28.453 "driver_specific": { 00:18:28.453 "passthru": { 00:18:28.453 "name": "pt2", 00:18:28.453 "base_bdev_name": "malloc2" 00:18:28.453 } 00:18:28.453 } 00:18:28.453 }' 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.453 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:28.712 19:02:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.971 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.971 "name": "pt3", 00:18:28.971 "aliases": [ 00:18:28.971 "00000000-0000-0000-0000-000000000003" 00:18:28.971 ], 00:18:28.971 "product_name": "passthru", 00:18:28.971 "block_size": 512, 00:18:28.971 "num_blocks": 65536, 00:18:28.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.971 "assigned_rate_limits": { 00:18:28.971 "rw_ios_per_sec": 0, 00:18:28.972 "rw_mbytes_per_sec": 0, 00:18:28.972 "r_mbytes_per_sec": 0, 00:18:28.972 "w_mbytes_per_sec": 0 00:18:28.972 }, 00:18:28.972 "claimed": true, 00:18:28.972 "claim_type": "exclusive_write", 00:18:28.972 "zoned": false, 00:18:28.972 "supported_io_types": { 00:18:28.972 "read": true, 00:18:28.972 "write": true, 00:18:28.972 "unmap": true, 00:18:28.972 "write_zeroes": true, 00:18:28.972 "flush": true, 00:18:28.972 "reset": true, 00:18:28.972 "compare": false, 00:18:28.972 "compare_and_write": false, 00:18:28.972 "abort": true, 00:18:28.972 "nvme_admin": false, 00:18:28.972 "nvme_io": false 00:18:28.972 }, 00:18:28.972 "memory_domains": [ 00:18:28.972 { 00:18:28.972 "dma_device_id": "system", 00:18:28.972 "dma_device_type": 1 00:18:28.972 }, 00:18:28.972 { 00:18:28.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.972 "dma_device_type": 2 00:18:28.972 } 00:18:28.972 ], 00:18:28.972 "driver_specific": { 00:18:28.972 "passthru": { 00:18:28.972 "name": "pt3", 00:18:28.972 "base_bdev_name": "malloc3" 00:18:28.972 } 00:18:28.972 } 00:18:28.972 }' 00:18:28.972 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.972 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.972 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.972 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.972 19:02:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:29.231 19:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:29.491 [2024-06-10 19:02:44.053494] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.491 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7f99a4e5-7cf2-4931-b25d-ab31b6867e76 00:18:29.491 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7f99a4e5-7cf2-4931-b25d-ab31b6867e76 ']' 00:18:29.491 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:29.751 [2024-06-10 19:02:44.269843] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.751 [2024-06-10 19:02:44.269861] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.751 [2024-06-10 19:02:44.269907] bdev_raid.c: 474:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:29.751 [2024-06-10 19:02:44.269973] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.751 [2024-06-10 19:02:44.269985] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d2a210 name raid_bdev1, state offline 00:18:29.751 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.751 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:30.010 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:30.010 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:30.010 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.010 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:30.010 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.010 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:30.269 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:30.269 19:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:30.529 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:30.529 19:02:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:30.789 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:30.789 19:02:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:31.048 [2024-06-10 19:02:45.549255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:31.048 [2024-06-10 19:02:45.550519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:31.048 [2024-06-10 19:02:45.550567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:31.048 [2024-06-10 19:02:45.550619] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:31.048 [2024-06-10 19:02:45.550657] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:31.048 [2024-06-10 19:02:45.550678] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:31.048 [2024-06-10 19:02:45.550694] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.048 [2024-06-10 19:02:45.550703] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d32010 name raid_bdev1, state configuring 00:18:31.048 request: 00:18:31.048 { 00:18:31.048 "name": "raid_bdev1", 00:18:31.048 "raid_level": "raid1", 00:18:31.048 "base_bdevs": [ 00:18:31.048 "malloc1", 00:18:31.048 "malloc2", 00:18:31.048 "malloc3" 00:18:31.048 ], 00:18:31.048 "superblock": false, 00:18:31.048 "method": "bdev_raid_create", 00:18:31.048 "req_id": 1 00:18:31.048 } 00:18:31.048 Got JSON-RPC error response 00:18:31.048 response: 00:18:31.048 { 00:18:31.048 "code": -17, 00:18:31.048 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:31.048 } 00:18:31.048 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # 
es=1 00:18:31.048 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:31.048 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:31.048 19:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:31.048 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.049 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:31.049 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:31.049 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:31.049 19:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.308 [2024-06-10 19:02:45.990366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.308 [2024-06-10 19:02:45.990407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.308 [2024-06-10 19:02:45.990424] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d31da0 00:18:31.308 [2024-06-10 19:02:45.990436] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.308 [2024-06-10 19:02:45.991936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.308 [2024-06-10 19:02:45.991964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.308 [2024-06-10 19:02:45.992025] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.308 [2024-06-10 19:02:45.992048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:18:31.308 pt1 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.308 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.621 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.621 "name": "raid_bdev1", 00:18:31.621 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:31.621 "strip_size_kb": 0, 00:18:31.621 "state": "configuring", 00:18:31.621 "raid_level": "raid1", 00:18:31.621 "superblock": true, 00:18:31.621 "num_base_bdevs": 3, 00:18:31.621 "num_base_bdevs_discovered": 1, 00:18:31.621 "num_base_bdevs_operational": 3, 00:18:31.621 "base_bdevs_list": [ 00:18:31.621 { 00:18:31.621 
"name": "pt1", 00:18:31.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.621 "is_configured": true, 00:18:31.621 "data_offset": 2048, 00:18:31.621 "data_size": 63488 00:18:31.621 }, 00:18:31.621 { 00:18:31.621 "name": null, 00:18:31.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.621 "is_configured": false, 00:18:31.621 "data_offset": 2048, 00:18:31.621 "data_size": 63488 00:18:31.621 }, 00:18:31.621 { 00:18:31.621 "name": null, 00:18:31.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.621 "is_configured": false, 00:18:31.621 "data_offset": 2048, 00:18:31.621 "data_size": 63488 00:18:31.621 } 00:18:31.621 ] 00:18:31.621 }' 00:18:31.621 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.621 19:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.264 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:18:32.264 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.264 [2024-06-10 19:02:46.924842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.264 [2024-06-10 19:02:46.924889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.264 [2024-06-10 19:02:46.924907] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d29700 00:18:32.264 [2024-06-10 19:02:46.924918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.264 [2024-06-10 19:02:46.925236] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.264 [2024-06-10 19:02:46.925252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.264 [2024-06-10 19:02:46.925311] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:32.264 [2024-06-10 19:02:46.925329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.264 pt2 00:18:32.264 19:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:32.524 [2024-06-10 19:02:47.137417] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.524 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.783 19:02:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.783 "name": "raid_bdev1", 00:18:32.783 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:32.784 "strip_size_kb": 0, 00:18:32.784 "state": "configuring", 00:18:32.784 "raid_level": "raid1", 00:18:32.784 "superblock": true, 00:18:32.784 "num_base_bdevs": 3, 00:18:32.784 "num_base_bdevs_discovered": 1, 00:18:32.784 "num_base_bdevs_operational": 3, 00:18:32.784 "base_bdevs_list": [ 00:18:32.784 { 00:18:32.784 "name": "pt1", 00:18:32.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.784 "is_configured": true, 00:18:32.784 "data_offset": 2048, 00:18:32.784 "data_size": 63488 00:18:32.784 }, 00:18:32.784 { 00:18:32.784 "name": null, 00:18:32.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.784 "is_configured": false, 00:18:32.784 "data_offset": 2048, 00:18:32.784 "data_size": 63488 00:18:32.784 }, 00:18:32.784 { 00:18:32.784 "name": null, 00:18:32.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.784 "is_configured": false, 00:18:32.784 "data_offset": 2048, 00:18:32.784 "data_size": 63488 00:18:32.784 } 00:18:32.784 ] 00:18:32.784 }' 00:18:32.784 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.784 19:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.352 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:33.352 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:33.352 19:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.611 [2024-06-10 19:02:48.156089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.611 [2024-06-10 19:02:48.156139] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.611 [2024-06-10 19:02:48.156155] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d284c0 00:18:33.611 [2024-06-10 19:02:48.156167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.611 [2024-06-10 19:02:48.156493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.611 [2024-06-10 19:02:48.156508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.611 [2024-06-10 19:02:48.156567] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:33.611 [2024-06-10 19:02:48.156595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.611 pt2 00:18:33.611 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:33.611 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:33.611 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:33.871 [2024-06-10 19:02:48.388702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:33.871 [2024-06-10 19:02:48.388738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.871 [2024-06-10 19:02:48.388755] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d28fe0 00:18:33.871 [2024-06-10 19:02:48.388766] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.871 [2024-06-10 19:02:48.389052] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.871 [2024-06-10 19:02:48.389068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:33.871 
[2024-06-10 19:02:48.389119] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:33.871 [2024-06-10 19:02:48.389135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:33.871 [2024-06-10 19:02:48.389238] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d287e0 00:18:33.871 [2024-06-10 19:02:48.389248] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:33.871 [2024-06-10 19:02:48.389401] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2d960 00:18:33.871 [2024-06-10 19:02:48.389527] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d287e0 00:18:33.871 [2024-06-10 19:02:48.389536] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d287e0 00:18:33.871 [2024-06-10 19:02:48.389643] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.871 pt3 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.871 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.131 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.131 "name": "raid_bdev1", 00:18:34.131 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:34.131 "strip_size_kb": 0, 00:18:34.131 "state": "online", 00:18:34.131 "raid_level": "raid1", 00:18:34.131 "superblock": true, 00:18:34.131 "num_base_bdevs": 3, 00:18:34.131 "num_base_bdevs_discovered": 3, 00:18:34.131 "num_base_bdevs_operational": 3, 00:18:34.131 "base_bdevs_list": [ 00:18:34.131 { 00:18:34.131 "name": "pt1", 00:18:34.131 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.131 "is_configured": true, 00:18:34.131 "data_offset": 2048, 00:18:34.131 "data_size": 63488 00:18:34.131 }, 00:18:34.131 { 00:18:34.131 "name": "pt2", 00:18:34.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.131 "is_configured": true, 00:18:34.131 "data_offset": 2048, 00:18:34.131 "data_size": 63488 00:18:34.131 }, 00:18:34.131 { 00:18:34.131 "name": "pt3", 00:18:34.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:34.131 "is_configured": true, 00:18:34.131 "data_offset": 2048, 00:18:34.131 "data_size": 63488 00:18:34.131 } 00:18:34.131 ] 00:18:34.131 }' 00:18:34.131 19:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.131 19:02:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:34.699 [2024-06-10 19:02:49.355666] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:34.699 "name": "raid_bdev1", 00:18:34.699 "aliases": [ 00:18:34.699 "7f99a4e5-7cf2-4931-b25d-ab31b6867e76" 00:18:34.699 ], 00:18:34.699 "product_name": "Raid Volume", 00:18:34.699 "block_size": 512, 00:18:34.699 "num_blocks": 63488, 00:18:34.699 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:34.699 "assigned_rate_limits": { 00:18:34.699 "rw_ios_per_sec": 0, 00:18:34.699 "rw_mbytes_per_sec": 0, 00:18:34.699 "r_mbytes_per_sec": 0, 00:18:34.699 "w_mbytes_per_sec": 0 00:18:34.699 }, 00:18:34.699 "claimed": false, 00:18:34.699 "zoned": false, 00:18:34.699 "supported_io_types": { 00:18:34.699 "read": true, 00:18:34.699 "write": true, 00:18:34.699 "unmap": false, 00:18:34.699 "write_zeroes": true, 00:18:34.699 "flush": false, 00:18:34.699 "reset": true, 00:18:34.699 "compare": 
false, 00:18:34.699 "compare_and_write": false, 00:18:34.699 "abort": false, 00:18:34.699 "nvme_admin": false, 00:18:34.699 "nvme_io": false 00:18:34.699 }, 00:18:34.699 "memory_domains": [ 00:18:34.699 { 00:18:34.699 "dma_device_id": "system", 00:18:34.699 "dma_device_type": 1 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.699 "dma_device_type": 2 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "dma_device_id": "system", 00:18:34.699 "dma_device_type": 1 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.699 "dma_device_type": 2 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "dma_device_id": "system", 00:18:34.699 "dma_device_type": 1 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.699 "dma_device_type": 2 00:18:34.699 } 00:18:34.699 ], 00:18:34.699 "driver_specific": { 00:18:34.699 "raid": { 00:18:34.699 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:34.699 "strip_size_kb": 0, 00:18:34.699 "state": "online", 00:18:34.699 "raid_level": "raid1", 00:18:34.699 "superblock": true, 00:18:34.699 "num_base_bdevs": 3, 00:18:34.699 "num_base_bdevs_discovered": 3, 00:18:34.699 "num_base_bdevs_operational": 3, 00:18:34.699 "base_bdevs_list": [ 00:18:34.699 { 00:18:34.699 "name": "pt1", 00:18:34.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.699 "is_configured": true, 00:18:34.699 "data_offset": 2048, 00:18:34.699 "data_size": 63488 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "name": "pt2", 00:18:34.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.699 "is_configured": true, 00:18:34.699 "data_offset": 2048, 00:18:34.699 "data_size": 63488 00:18:34.699 }, 00:18:34.699 { 00:18:34.699 "name": "pt3", 00:18:34.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:34.699 "is_configured": true, 00:18:34.699 "data_offset": 2048, 00:18:34.699 "data_size": 63488 00:18:34.699 } 00:18:34.699 ] 00:18:34.699 } 00:18:34.699 
} 00:18:34.699 }' 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:34.699 pt2 00:18:34.699 pt3' 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:34.699 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:34.959 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:34.959 "name": "pt1", 00:18:34.959 "aliases": [ 00:18:34.959 "00000000-0000-0000-0000-000000000001" 00:18:34.959 ], 00:18:34.959 "product_name": "passthru", 00:18:34.959 "block_size": 512, 00:18:34.959 "num_blocks": 65536, 00:18:34.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.959 "assigned_rate_limits": { 00:18:34.959 "rw_ios_per_sec": 0, 00:18:34.959 "rw_mbytes_per_sec": 0, 00:18:34.959 "r_mbytes_per_sec": 0, 00:18:34.959 "w_mbytes_per_sec": 0 00:18:34.959 }, 00:18:34.959 "claimed": true, 00:18:34.959 "claim_type": "exclusive_write", 00:18:34.959 "zoned": false, 00:18:34.959 "supported_io_types": { 00:18:34.959 "read": true, 00:18:34.959 "write": true, 00:18:34.959 "unmap": true, 00:18:34.959 "write_zeroes": true, 00:18:34.959 "flush": true, 00:18:34.959 "reset": true, 00:18:34.959 "compare": false, 00:18:34.959 "compare_and_write": false, 00:18:34.959 "abort": true, 00:18:34.959 "nvme_admin": false, 00:18:34.959 "nvme_io": false 00:18:34.959 }, 00:18:34.959 "memory_domains": [ 00:18:34.959 { 00:18:34.959 "dma_device_id": "system", 00:18:34.959 "dma_device_type": 1 00:18:34.959 }, 00:18:34.959 { 00:18:34.959 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:34.959 "dma_device_type": 2 00:18:34.959 } 00:18:34.959 ], 00:18:34.959 "driver_specific": { 00:18:34.959 "passthru": { 00:18:34.959 "name": "pt1", 00:18:34.959 "base_bdev_name": "malloc1" 00:18:34.959 } 00:18:34.959 } 00:18:34.959 }' 00:18:34.959 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:34.959 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.219 19:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.477 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:35.477 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:35.477 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:35.477 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:35.736 "name": "pt2", 00:18:35.736 
"aliases": [ 00:18:35.736 "00000000-0000-0000-0000-000000000002" 00:18:35.736 ], 00:18:35.736 "product_name": "passthru", 00:18:35.736 "block_size": 512, 00:18:35.736 "num_blocks": 65536, 00:18:35.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.736 "assigned_rate_limits": { 00:18:35.736 "rw_ios_per_sec": 0, 00:18:35.736 "rw_mbytes_per_sec": 0, 00:18:35.736 "r_mbytes_per_sec": 0, 00:18:35.736 "w_mbytes_per_sec": 0 00:18:35.736 }, 00:18:35.736 "claimed": true, 00:18:35.736 "claim_type": "exclusive_write", 00:18:35.736 "zoned": false, 00:18:35.736 "supported_io_types": { 00:18:35.736 "read": true, 00:18:35.736 "write": true, 00:18:35.736 "unmap": true, 00:18:35.736 "write_zeroes": true, 00:18:35.736 "flush": true, 00:18:35.736 "reset": true, 00:18:35.736 "compare": false, 00:18:35.736 "compare_and_write": false, 00:18:35.736 "abort": true, 00:18:35.736 "nvme_admin": false, 00:18:35.736 "nvme_io": false 00:18:35.736 }, 00:18:35.736 "memory_domains": [ 00:18:35.736 { 00:18:35.736 "dma_device_id": "system", 00:18:35.736 "dma_device_type": 1 00:18:35.736 }, 00:18:35.736 { 00:18:35.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.736 "dma_device_type": 2 00:18:35.736 } 00:18:35.736 ], 00:18:35.736 "driver_specific": { 00:18:35.736 "passthru": { 00:18:35.736 "name": "pt2", 00:18:35.736 "base_bdev_name": "malloc2" 00:18:35.736 } 00:18:35.736 } 00:18:35.736 }' 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:35.736 
19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:35.736 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.995 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.995 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:35.995 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:35.995 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:35.995 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.254 "name": "pt3", 00:18:36.254 "aliases": [ 00:18:36.254 "00000000-0000-0000-0000-000000000003" 00:18:36.254 ], 00:18:36.254 "product_name": "passthru", 00:18:36.254 "block_size": 512, 00:18:36.254 "num_blocks": 65536, 00:18:36.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.254 "assigned_rate_limits": { 00:18:36.254 "rw_ios_per_sec": 0, 00:18:36.254 "rw_mbytes_per_sec": 0, 00:18:36.254 "r_mbytes_per_sec": 0, 00:18:36.254 "w_mbytes_per_sec": 0 00:18:36.254 }, 00:18:36.254 "claimed": true, 00:18:36.254 "claim_type": "exclusive_write", 00:18:36.254 "zoned": false, 00:18:36.254 "supported_io_types": { 00:18:36.254 "read": true, 00:18:36.254 "write": true, 00:18:36.254 "unmap": true, 00:18:36.254 "write_zeroes": true, 00:18:36.254 "flush": true, 00:18:36.254 "reset": true, 00:18:36.254 "compare": false, 00:18:36.254 "compare_and_write": false, 00:18:36.254 "abort": true, 
00:18:36.254 "nvme_admin": false, 00:18:36.254 "nvme_io": false 00:18:36.254 }, 00:18:36.254 "memory_domains": [ 00:18:36.254 { 00:18:36.254 "dma_device_id": "system", 00:18:36.254 "dma_device_type": 1 00:18:36.254 }, 00:18:36.254 { 00:18:36.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.254 "dma_device_type": 2 00:18:36.254 } 00:18:36.254 ], 00:18:36.254 "driver_specific": { 00:18:36.254 "passthru": { 00:18:36.254 "name": "pt3", 00:18:36.254 "base_bdev_name": "malloc3" 00:18:36.254 } 00:18:36.254 } 00:18:36.254 }' 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:36.254 19:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.254 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:36.513 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.513 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.513 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:36.513 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:36.513 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:36.513 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # 
jq -r '.[] | .uuid' 00:18:36.772 [2024-06-10 19:02:51.332937] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.772 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7f99a4e5-7cf2-4931-b25d-ab31b6867e76 '!=' 7f99a4e5-7cf2-4931-b25d-ab31b6867e76 ']' 00:18:36.772 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:36.772 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:36.772 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:36.772 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:37.031 [2024-06-10 19:02:51.561325] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.031 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.290 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.290 "name": "raid_bdev1", 00:18:37.290 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:37.290 "strip_size_kb": 0, 00:18:37.290 "state": "online", 00:18:37.290 "raid_level": "raid1", 00:18:37.290 "superblock": true, 00:18:37.290 "num_base_bdevs": 3, 00:18:37.290 "num_base_bdevs_discovered": 2, 00:18:37.290 "num_base_bdevs_operational": 2, 00:18:37.290 "base_bdevs_list": [ 00:18:37.290 { 00:18:37.290 "name": null, 00:18:37.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.290 "is_configured": false, 00:18:37.290 "data_offset": 2048, 00:18:37.290 "data_size": 63488 00:18:37.290 }, 00:18:37.290 { 00:18:37.290 "name": "pt2", 00:18:37.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.290 "is_configured": true, 00:18:37.290 "data_offset": 2048, 00:18:37.290 "data_size": 63488 00:18:37.290 }, 00:18:37.290 { 00:18:37.290 "name": "pt3", 00:18:37.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.290 "is_configured": true, 00:18:37.290 "data_offset": 2048, 00:18:37.290 "data_size": 63488 00:18:37.290 } 00:18:37.290 ] 00:18:37.290 }' 00:18:37.290 19:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.290 19:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.857 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:37.857 [2024-06-10 19:02:52.583992] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:18:37.857 [2024-06-10 19:02:52.584018] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.857 [2024-06-10 19:02:52.584070] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.857 [2024-06-10 19:02:52.584118] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.857 [2024-06-10 19:02:52.584129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d287e0 name raid_bdev1, state offline 00:18:37.857 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:37.857 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.116 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:38.116 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:38.116 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:38.116 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:38.116 19:02:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:38.375 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:38.375 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:38.375 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:38.635 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:38.635 19:02:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:38.635 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:38.635 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:38.635 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.894 [2024-06-10 19:02:53.486311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.894 [2024-06-10 19:02:53.486357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.894 [2024-06-10 19:02:53.486373] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ecb0f0 00:18:38.894 [2024-06-10 19:02:53.486384] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.894 [2024-06-10 19:02:53.487950] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.894 [2024-06-10 19:02:53.487980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.894 [2024-06-10 19:02:53.488046] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.894 [2024-06-10 19:02:53.488071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.894 pt2 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.894 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.153 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.153 "name": "raid_bdev1", 00:18:39.153 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:39.153 "strip_size_kb": 0, 00:18:39.153 "state": "configuring", 00:18:39.153 "raid_level": "raid1", 00:18:39.153 "superblock": true, 00:18:39.153 "num_base_bdevs": 3, 00:18:39.153 "num_base_bdevs_discovered": 1, 00:18:39.153 "num_base_bdevs_operational": 2, 00:18:39.153 "base_bdevs_list": [ 00:18:39.153 { 00:18:39.153 "name": null, 00:18:39.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.153 "is_configured": false, 00:18:39.153 "data_offset": 2048, 00:18:39.153 "data_size": 63488 00:18:39.153 }, 00:18:39.153 { 00:18:39.153 "name": "pt2", 00:18:39.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.153 "is_configured": true, 00:18:39.153 "data_offset": 2048, 00:18:39.153 "data_size": 63488 00:18:39.153 }, 00:18:39.153 { 00:18:39.153 "name": null, 00:18:39.153 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:18:39.153 "is_configured": false, 00:18:39.153 "data_offset": 2048, 00:18:39.153 "data_size": 63488 00:18:39.153 } 00:18:39.153 ] 00:18:39.153 }' 00:18:39.153 19:02:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.153 19:02:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.721 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:39.721 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:39.721 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:18:39.721 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:39.980 [2024-06-10 19:02:54.500982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:39.980 [2024-06-10 19:02:54.501032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.980 [2024-06-10 19:02:54.501049] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d29930 00:18:39.980 [2024-06-10 19:02:54.501061] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.980 [2024-06-10 19:02:54.501381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.980 [2024-06-10 19:02:54.501396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:39.981 [2024-06-10 19:02:54.501457] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:39.981 [2024-06-10 19:02:54.501475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:39.981 [2024-06-10 19:02:54.501562] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x1ecaa60 00:18:39.981 [2024-06-10 19:02:54.501572] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:39.981 [2024-06-10 19:02:54.501736] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ecaa00 00:18:39.981 [2024-06-10 19:02:54.501855] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ecaa60 00:18:39.981 [2024-06-10 19:02:54.501864] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ecaa60 00:18:39.981 [2024-06-10 19:02:54.501961] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.981 pt3 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.981 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.981 19:02:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.240 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.240 "name": "raid_bdev1", 00:18:40.240 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:40.240 "strip_size_kb": 0, 00:18:40.240 "state": "online", 00:18:40.240 "raid_level": "raid1", 00:18:40.240 "superblock": true, 00:18:40.240 "num_base_bdevs": 3, 00:18:40.240 "num_base_bdevs_discovered": 2, 00:18:40.240 "num_base_bdevs_operational": 2, 00:18:40.240 "base_bdevs_list": [ 00:18:40.240 { 00:18:40.240 "name": null, 00:18:40.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.240 "is_configured": false, 00:18:40.240 "data_offset": 2048, 00:18:40.240 "data_size": 63488 00:18:40.240 }, 00:18:40.240 { 00:18:40.240 "name": "pt2", 00:18:40.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.240 "is_configured": true, 00:18:40.240 "data_offset": 2048, 00:18:40.240 "data_size": 63488 00:18:40.240 }, 00:18:40.240 { 00:18:40.240 "name": "pt3", 00:18:40.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.240 "is_configured": true, 00:18:40.240 "data_offset": 2048, 00:18:40.240 "data_size": 63488 00:18:40.240 } 00:18:40.240 ] 00:18:40.240 }' 00:18:40.240 19:02:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.240 19:02:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.808 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:40.808 [2024-06-10 19:02:55.523670] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.808 [2024-06-10 19:02:55.523695] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.808 [2024-06-10 19:02:55.523746] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.808 [2024-06-10 19:02:55.523795] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.808 [2024-06-10 19:02:55.523806] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ecaa60 name raid_bdev1, state offline 00:18:40.808 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.808 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:41.066 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:41.066 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:41.066 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:18:41.066 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:18:41.066 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:41.326 19:02:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:41.585 [2024-06-10 19:02:56.201425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:41.585 [2024-06-10 19:02:56.201470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.585 [2024-06-10 19:02:56.201486] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d2af30 00:18:41.585 [2024-06-10 19:02:56.201498] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.585 [2024-06-10 19:02:56.203011] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.585 [2024-06-10 19:02:56.203038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:41.585 [2024-06-10 19:02:56.203104] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:41.585 [2024-06-10 19:02:56.203130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.585 [2024-06-10 19:02:56.203222] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:41.585 [2024-06-10 19:02:56.203234] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.585 [2024-06-10 19:02:56.203247] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ecbd00 name raid_bdev1, state configuring 00:18:41.585 [2024-06-10 19:02:56.203269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.585 pt1 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.585 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.845 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.845 "name": "raid_bdev1", 00:18:41.845 "uuid": "7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:41.845 "strip_size_kb": 0, 00:18:41.845 "state": "configuring", 00:18:41.845 "raid_level": "raid1", 00:18:41.845 "superblock": true, 00:18:41.845 "num_base_bdevs": 3, 00:18:41.845 "num_base_bdevs_discovered": 1, 00:18:41.845 "num_base_bdevs_operational": 2, 00:18:41.845 "base_bdevs_list": [ 00:18:41.845 { 00:18:41.845 "name": null, 00:18:41.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.845 "is_configured": false, 00:18:41.845 "data_offset": 2048, 00:18:41.845 "data_size": 63488 00:18:41.845 }, 00:18:41.845 { 00:18:41.845 "name": "pt2", 00:18:41.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.845 "is_configured": true, 00:18:41.845 "data_offset": 2048, 00:18:41.845 "data_size": 63488 00:18:41.845 }, 00:18:41.845 { 00:18:41.845 "name": null, 00:18:41.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.845 "is_configured": false, 00:18:41.845 "data_offset": 2048, 00:18:41.845 "data_size": 63488 00:18:41.845 } 00:18:41.845 ] 00:18:41.845 }' 00:18:41.845 19:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.845 19:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.412 19:02:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:18:42.412 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:42.671 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:18:42.671 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:42.931 [2024-06-10 19:02:57.444714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:42.931 [2024-06-10 19:02:57.444762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.931 [2024-06-10 19:02:57.444778] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d29930 00:18:42.931 [2024-06-10 19:02:57.444790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.931 [2024-06-10 19:02:57.445110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.931 [2024-06-10 19:02:57.445126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:42.931 [2024-06-10 19:02:57.445186] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:42.931 [2024-06-10 19:02:57.445204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.931 [2024-06-10 19:02:57.445296] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d32010 00:18:42.931 [2024-06-10 19:02:57.445306] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:42.931 [2024-06-10 19:02:57.445463] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x1ee3d50 00:18:42.931 [2024-06-10 19:02:57.445587] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d32010 00:18:42.931 [2024-06-10 19:02:57.445597] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d32010 00:18:42.931 [2024-06-10 19:02:57.445688] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.931 pt3 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.931 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.190 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.190 "name": "raid_bdev1", 00:18:43.190 "uuid": 
"7f99a4e5-7cf2-4931-b25d-ab31b6867e76", 00:18:43.190 "strip_size_kb": 0, 00:18:43.190 "state": "online", 00:18:43.190 "raid_level": "raid1", 00:18:43.190 "superblock": true, 00:18:43.190 "num_base_bdevs": 3, 00:18:43.190 "num_base_bdevs_discovered": 2, 00:18:43.190 "num_base_bdevs_operational": 2, 00:18:43.190 "base_bdevs_list": [ 00:18:43.190 { 00:18:43.190 "name": null, 00:18:43.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.190 "is_configured": false, 00:18:43.190 "data_offset": 2048, 00:18:43.190 "data_size": 63488 00:18:43.190 }, 00:18:43.190 { 00:18:43.190 "name": "pt2", 00:18:43.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.190 "is_configured": true, 00:18:43.190 "data_offset": 2048, 00:18:43.190 "data_size": 63488 00:18:43.190 }, 00:18:43.190 { 00:18:43.190 "name": "pt3", 00:18:43.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.190 "is_configured": true, 00:18:43.190 "data_offset": 2048, 00:18:43.190 "data_size": 63488 00:18:43.190 } 00:18:43.190 ] 00:18:43.190 }' 00:18:43.190 19:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.190 19:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.758 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:43.758 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:43.758 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:43.758 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:43.758 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:44.017 [2024-06-10 
19:02:58.700367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 7f99a4e5-7cf2-4931-b25d-ab31b6867e76 '!=' 7f99a4e5-7cf2-4931-b25d-ab31b6867e76 ']' 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1684264 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1684264 ']' 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1684264 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:44.017 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1684264 00:18:44.276 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:44.277 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:44.277 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1684264' 00:18:44.277 killing process with pid 1684264 00:18:44.277 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1684264 00:18:44.277 [2024-06-10 19:02:58.775520] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.277 [2024-06-10 19:02:58.775574] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.277 [2024-06-10 19:02:58.775638] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.277 [2024-06-10 19:02:58.775650] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d32010 name raid_bdev1, state offline 00:18:44.277 19:02:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1684264 00:18:44.277 [2024-06-10 19:02:58.798921] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.277 19:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:44.277 00:18:44.277 real 0m20.310s 00:18:44.277 user 0m37.143s 00:18:44.277 sys 0m3.626s 00:18:44.277 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:44.277 19:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.277 ************************************ 00:18:44.277 END TEST raid_superblock_test 00:18:44.277 ************************************ 00:18:44.277 19:02:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:18:44.536 19:02:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:44.536 19:02:59 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:44.536 19:02:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.536 ************************************ 00:18:44.536 START TEST raid_read_error_test 00:18:44.536 ************************************ 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 3 read 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:44.536 
19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.lu58BkGXCS 00:18:44.536 19:02:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1688184 00:18:44.536 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1688184 /var/tmp/spdk-raid.sock 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1688184 ']' 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:44.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:44.537 19:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.537 [2024-06-10 19:02:59.148331] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:18:44.537 [2024-06-10 19:02:59.148386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688184 ] 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.0 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.1 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.2 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.3 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.4 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.5 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.6 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:01.7 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.0 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.1 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.2 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.3 cannot be used 00:18:44.537 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.4 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.5 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.6 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b6:02.7 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.0 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.1 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.2 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.3 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.4 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.5 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.6 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:01.7 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.0 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.1 cannot be used 00:18:44.537 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.2 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.3 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.4 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.5 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.6 cannot be used 00:18:44.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:44.537 EAL: Requested device 0000:b8:02.7 cannot be used 00:18:44.537 [2024-06-10 19:02:59.281227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.795 [2024-06-10 19:02:59.369914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.796 [2024-06-10 19:02:59.432908] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.796 [2024-06-10 19:02:59.432944] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.362 19:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:45.362 19:03:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:18:45.362 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:45.362 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:45.621 BaseBdev1_malloc 00:18:45.621 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:45.880 true 00:18:45.880 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:46.139 [2024-06-10 19:03:00.731278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:46.139 [2024-06-10 19:03:00.731317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.139 [2024-06-10 19:03:00.731338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x230ed50 00:18:46.139 [2024-06-10 19:03:00.731350] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.139 [2024-06-10 19:03:00.732933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.139 [2024-06-10 19:03:00.732960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:46.139 BaseBdev1 00:18:46.139 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:46.139 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:46.397 BaseBdev2_malloc 00:18:46.397 19:03:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:46.656 true 00:18:46.656 19:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:46.914 [2024-06-10 19:03:01.413310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:18:46.914 [2024-06-10 19:03:01.413348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.914 [2024-06-10 19:03:01.413368] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23142e0 00:18:46.914 [2024-06-10 19:03:01.413380] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.914 [2024-06-10 19:03:01.414755] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.914 [2024-06-10 19:03:01.414781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:46.914 BaseBdev2 00:18:46.914 19:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:46.914 19:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:46.914 BaseBdev3_malloc 00:18:46.914 19:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:47.172 true 00:18:47.172 19:03:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:47.431 [2024-06-10 19:03:02.087399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:47.431 [2024-06-10 19:03:02.087438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.431 [2024-06-10 19:03:02.087458] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2315fd0 00:18:47.431 [2024-06-10 19:03:02.087469] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.431 [2024-06-10 
19:03:02.088831] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.431 [2024-06-10 19:03:02.088856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:47.431 BaseBdev3 00:18:47.431 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:47.689 [2024-06-10 19:03:02.308008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.689 [2024-06-10 19:03:02.309111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.689 [2024-06-10 19:03:02.309174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.689 [2024-06-10 19:03:02.309365] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23173f0 00:18:47.689 [2024-06-10 19:03:02.309376] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:47.689 [2024-06-10 19:03:02.309539] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2314ea0 00:18:47.689 [2024-06-10 19:03:02.309686] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23173f0 00:18:47.689 [2024-06-10 19:03:02.309695] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23173f0 00:18:47.689 [2024-06-10 19:03:02.309789] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.689 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.947 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.947 "name": "raid_bdev1", 00:18:47.947 "uuid": "46722109-b967-4779-b2e0-31bd8ce696be", 00:18:47.947 "strip_size_kb": 0, 00:18:47.947 "state": "online", 00:18:47.947 "raid_level": "raid1", 00:18:47.947 "superblock": true, 00:18:47.947 "num_base_bdevs": 3, 00:18:47.947 "num_base_bdevs_discovered": 3, 00:18:47.947 "num_base_bdevs_operational": 3, 00:18:47.947 "base_bdevs_list": [ 00:18:47.947 { 00:18:47.947 "name": "BaseBdev1", 00:18:47.947 "uuid": "619d30e1-f964-5961-87d5-c5e0dc34a1d1", 00:18:47.947 "is_configured": true, 00:18:47.947 "data_offset": 2048, 00:18:47.947 "data_size": 63488 00:18:47.947 }, 00:18:47.947 { 00:18:47.947 "name": "BaseBdev2", 00:18:47.947 "uuid": "f8fd562f-ed8a-5a63-946f-2b58312f3746", 00:18:47.947 "is_configured": true, 00:18:47.947 "data_offset": 2048, 00:18:47.947 "data_size": 63488 
00:18:47.947 }, 00:18:47.947 { 00:18:47.947 "name": "BaseBdev3", 00:18:47.947 "uuid": "b35ae933-71b6-5fec-9753-7a5fdbad25b5", 00:18:47.947 "is_configured": true, 00:18:47.947 "data_offset": 2048, 00:18:47.947 "data_size": 63488 00:18:47.947 } 00:18:47.947 ] 00:18:47.947 }' 00:18:47.947 19:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.947 19:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.514 19:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:48.514 19:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:48.514 [2024-06-10 19:03:03.234662] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x216ab20 00:18:49.450 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.709 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.966 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.966 "name": "raid_bdev1", 00:18:49.966 "uuid": "46722109-b967-4779-b2e0-31bd8ce696be", 00:18:49.966 "strip_size_kb": 0, 00:18:49.966 "state": "online", 00:18:49.966 "raid_level": "raid1", 00:18:49.966 "superblock": true, 00:18:49.966 "num_base_bdevs": 3, 00:18:49.966 "num_base_bdevs_discovered": 3, 00:18:49.966 "num_base_bdevs_operational": 3, 00:18:49.966 "base_bdevs_list": [ 00:18:49.966 { 00:18:49.966 "name": "BaseBdev1", 00:18:49.967 "uuid": "619d30e1-f964-5961-87d5-c5e0dc34a1d1", 00:18:49.967 "is_configured": true, 00:18:49.967 "data_offset": 2048, 00:18:49.967 "data_size": 63488 00:18:49.967 }, 00:18:49.967 { 00:18:49.967 "name": "BaseBdev2", 00:18:49.967 "uuid": "f8fd562f-ed8a-5a63-946f-2b58312f3746", 00:18:49.967 "is_configured": true, 00:18:49.967 "data_offset": 2048, 00:18:49.967 "data_size": 63488 00:18:49.967 }, 00:18:49.967 { 00:18:49.967 "name": "BaseBdev3", 00:18:49.967 "uuid": 
"b35ae933-71b6-5fec-9753-7a5fdbad25b5", 00:18:49.967 "is_configured": true, 00:18:49.967 "data_offset": 2048, 00:18:49.967 "data_size": 63488 00:18:49.967 } 00:18:49.967 ] 00:18:49.967 }' 00:18:49.967 19:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.967 19:03:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.590 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:50.856 [2024-06-10 19:03:05.410620] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.856 [2024-06-10 19:03:05.410655] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.856 [2024-06-10 19:03:05.413621] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.856 [2024-06-10 19:03:05.413654] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.856 [2024-06-10 19:03:05.413743] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.856 [2024-06-10 19:03:05.413754] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23173f0 name raid_bdev1, state offline 00:18:50.856 0 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1688184 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1688184 ']' 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1688184 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # 
ps --no-headers -o comm= 1688184 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1688184' 00:18:50.856 killing process with pid 1688184 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1688184 00:18:50.856 [2024-06-10 19:03:05.489163] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.856 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1688184 00:18:50.856 [2024-06-10 19:03:05.507688] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.lu58BkGXCS 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:51.116 00:18:51.116 real 0m6.641s 00:18:51.116 user 0m10.443s 00:18:51.116 sys 0m1.181s 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:51.116 19:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.116 
************************************ 00:18:51.116 END TEST raid_read_error_test 00:18:51.116 ************************************ 00:18:51.116 19:03:05 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:18:51.116 19:03:05 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:51.116 19:03:05 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:51.116 19:03:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.116 ************************************ 00:18:51.116 START TEST raid_write_error_test 00:18:51.116 ************************************ 00:18:51.116 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 3 write 00:18:51.116 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:51.116 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
echo BaseBdev3 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.lnbxScoMdi 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1689913 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1689913 /var/tmp/spdk-raid.sock 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1689913 ']' 
00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:51.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:51.117 19:03:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.376 [2024-06-10 19:03:05.873987] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:18:51.376 [2024-06-10 19:03:05.874044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689913 ] 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.0 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.1 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.2 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.3 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.4 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.5 cannot be used 00:18:51.377 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.6 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:01.7 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.0 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.1 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.2 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.3 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.4 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.5 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.6 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b6:02.7 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.0 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.1 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.2 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.3 cannot be used 00:18:51.377 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.4 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.5 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.6 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:01.7 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.0 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.1 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.2 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.3 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.4 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.5 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.6 cannot be used 00:18:51.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.377 EAL: Requested device 0000:b8:02.7 cannot be used 00:18:51.377 [2024-06-10 19:03:06.007595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.377 [2024-06-10 19:03:06.093939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.636 [2024-06-10 19:03:06.156965] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:18:51.636 [2024-06-10 19:03:06.156996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.204 19:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:52.204 19:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:18:52.204 19:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:52.204 19:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:52.464 BaseBdev1_malloc 00:18:52.464 19:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:52.464 true 00:18:52.723 19:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:52.723 [2024-06-10 19:03:07.425602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:52.723 [2024-06-10 19:03:07.425642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.723 [2024-06-10 19:03:07.425661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x203fd50 00:18:52.723 [2024-06-10 19:03:07.425674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.723 [2024-06-10 19:03:07.427248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.723 [2024-06-10 19:03:07.427275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:52.723 BaseBdev1 00:18:52.723 19:03:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:52.723 19:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:52.982 BaseBdev2_malloc 00:18:52.982 19:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:53.240 true 00:18:53.240 19:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:53.498 [2024-06-10 19:03:08.103792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:53.498 [2024-06-10 19:03:08.103831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.498 [2024-06-10 19:03:08.103848] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20452e0 00:18:53.498 [2024-06-10 19:03:08.103860] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.498 [2024-06-10 19:03:08.105238] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.498 [2024-06-10 19:03:08.105264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:53.498 BaseBdev2 00:18:53.498 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:53.498 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:53.756 BaseBdev3_malloc 00:18:53.756 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:54.015 true 00:18:54.015 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:54.015 [2024-06-10 19:03:08.761870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:54.015 [2024-06-10 19:03:08.761906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.015 [2024-06-10 19:03:08.761924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2046fd0 00:18:54.015 [2024-06-10 19:03:08.761936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.015 [2024-06-10 19:03:08.763239] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.015 [2024-06-10 19:03:08.763264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:54.015 BaseBdev3 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:54.274 [2024-06-10 19:03:08.982479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.274 [2024-06-10 19:03:08.983661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.274 [2024-06-10 19:03:08.983723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.274 [2024-06-10 19:03:08.983912] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20483f0 00:18:54.274 [2024-06-10 19:03:08.983924] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:18:54.274 [2024-06-10 19:03:08.984098] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2045ea0 00:18:54.274 [2024-06-10 19:03:08.984241] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20483f0 00:18:54.274 [2024-06-10 19:03:08.984250] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20483f0 00:18:54.274 [2024-06-10 19:03:08.984345] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:54.274 19:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.274 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.274 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.274 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.274 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.274 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.533 19:03:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.533 "name": "raid_bdev1", 00:18:54.533 "uuid": "88ad995a-b87f-4c7a-9d5c-63212e12663d", 00:18:54.533 "strip_size_kb": 0, 00:18:54.533 "state": "online", 00:18:54.533 "raid_level": "raid1", 00:18:54.533 "superblock": true, 00:18:54.533 "num_base_bdevs": 3, 00:18:54.533 "num_base_bdevs_discovered": 3, 00:18:54.533 "num_base_bdevs_operational": 3, 00:18:54.533 "base_bdevs_list": [ 00:18:54.533 { 00:18:54.533 "name": "BaseBdev1", 00:18:54.533 "uuid": "c663d0c7-ea85-5070-80f9-789d93e7e53b", 00:18:54.533 "is_configured": true, 00:18:54.533 "data_offset": 2048, 00:18:54.533 "data_size": 63488 00:18:54.533 }, 00:18:54.533 { 00:18:54.533 "name": "BaseBdev2", 00:18:54.533 "uuid": "a4761eaf-e97a-555d-8264-078dcc1d8219", 00:18:54.533 "is_configured": true, 00:18:54.533 "data_offset": 2048, 00:18:54.533 "data_size": 63488 00:18:54.533 }, 00:18:54.533 { 00:18:54.533 "name": "BaseBdev3", 00:18:54.533 "uuid": "ef2da451-c392-5f2f-82ef-6682681a10bf", 00:18:54.533 "is_configured": true, 00:18:54.533 "data_offset": 2048, 00:18:54.533 "data_size": 63488 00:18:54.533 } 00:18:54.533 ] 00:18:54.533 }' 00:18:54.533 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.533 19:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.102 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:55.102 19:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:55.361 [2024-06-10 19:03:09.897174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e9bb20 00:18:56.298 19:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error 
EE_BaseBdev1_malloc write failure 00:18:56.298 [2024-06-10 19:03:11.008629] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:56.298 [2024-06-10 19:03:11.008679] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.298 [2024-06-10 19:03:11.008859] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1e9bb20 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.298 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.557 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.557 "name": "raid_bdev1", 00:18:56.557 "uuid": "88ad995a-b87f-4c7a-9d5c-63212e12663d", 00:18:56.557 "strip_size_kb": 0, 00:18:56.557 "state": "online", 00:18:56.557 "raid_level": "raid1", 00:18:56.557 "superblock": true, 00:18:56.557 "num_base_bdevs": 3, 00:18:56.557 "num_base_bdevs_discovered": 2, 00:18:56.557 "num_base_bdevs_operational": 2, 00:18:56.557 "base_bdevs_list": [ 00:18:56.557 { 00:18:56.557 "name": null, 00:18:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.557 "is_configured": false, 00:18:56.557 "data_offset": 2048, 00:18:56.557 "data_size": 63488 00:18:56.557 }, 00:18:56.557 { 00:18:56.557 "name": "BaseBdev2", 00:18:56.557 "uuid": "a4761eaf-e97a-555d-8264-078dcc1d8219", 00:18:56.557 "is_configured": true, 00:18:56.557 "data_offset": 2048, 00:18:56.557 "data_size": 63488 00:18:56.557 }, 00:18:56.557 { 00:18:56.557 "name": "BaseBdev3", 00:18:56.557 "uuid": "ef2da451-c392-5f2f-82ef-6682681a10bf", 00:18:56.557 "is_configured": true, 00:18:56.557 "data_offset": 2048, 00:18:56.557 "data_size": 63488 00:18:56.557 } 00:18:56.557 ] 00:18:56.557 }' 00:18:56.557 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.557 19:03:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.127 19:03:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:57.386 [2024-06-10 19:03:12.030236] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.386 [2024-06-10 19:03:12.030273] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.386 [2024-06-10 19:03:12.033189] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.386 [2024-06-10 19:03:12.033217] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.386 [2024-06-10 19:03:12.033288] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.386 [2024-06-10 19:03:12.033298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20483f0 name raid_bdev1, state offline 00:18:57.386 0 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1689913 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1689913 ']' 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1689913 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1689913 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1689913' 00:18:57.386 killing process with pid 1689913 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1689913 00:18:57.386 [2024-06-10 19:03:12.105463] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.386 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1689913 00:18:57.386 [2024-06-10 19:03:12.123661] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.lnbxScoMdi 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:57.646 00:18:57.646 real 0m6.530s 00:18:57.646 user 0m10.231s 00:18:57.646 sys 0m1.187s 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:57.646 19:03:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.646 ************************************ 00:18:57.646 END TEST raid_write_error_test 00:18:57.646 ************************************ 00:18:57.646 19:03:12 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:18:57.646 19:03:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:57.646 19:03:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:57.646 19:03:12 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:57.646 19:03:12 bdev_raid -- common/autotest_common.sh@1106 
-- # xtrace_disable 00:18:57.646 19:03:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.906 ************************************ 00:18:57.906 START TEST raid_state_function_test 00:18:57.906 ************************************ 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 4 false 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.906 19:03:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1691222 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1691222' 00:18:57.906 Process raid pid: 1691222 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 
0 -L bdev_raid 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1691222 /var/tmp/spdk-raid.sock 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1691222 ']' 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:57.906 19:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.906 [2024-06-10 19:03:12.485293] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:18:57.906 [2024-06-10 19:03:12.485349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.0 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.1 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.2 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.3 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.4 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.5 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.6 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:01.7 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:02.0 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:02.1 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:02.2 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.906 EAL: Requested device 0000:b6:02.3 cannot be used 00:18:57.906 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b6:02.4 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b6:02.5 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b6:02.6 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b6:02.7 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.0 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.1 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.2 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.3 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.4 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.5 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.6 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:01.7 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.0 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.1 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:18:57.907 EAL: Requested device 0000:b8:02.2 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.3 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.4 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.5 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.6 cannot be used 00:18:57.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:57.907 EAL: Requested device 0000:b8:02.7 cannot be used 00:18:57.907 [2024-06-10 19:03:12.618952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.166 [2024-06-10 19:03:12.705072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.166 [2024-06-10 19:03:12.759504] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.166 [2024-06-10 19:03:12.759529] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.736 19:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:58.736 19:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:18:58.736 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:58.996 [2024-06-10 19:03:13.593169] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.996 [2024-06-10 19:03:13.593212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.996 [2024-06-10 
19:03:13.593222] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.996 [2024-06-10 19:03:13.593233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.996 [2024-06-10 19:03:13.593241] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.996 [2024-06-10 19:03:13.593252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.996 [2024-06-10 19:03:13.593260] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:58.996 [2024-06-10 19:03:13.593275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.996 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.256 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.256 "name": "Existed_Raid", 00:18:59.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.256 "strip_size_kb": 64, 00:18:59.256 "state": "configuring", 00:18:59.256 "raid_level": "raid0", 00:18:59.256 "superblock": false, 00:18:59.256 "num_base_bdevs": 4, 00:18:59.256 "num_base_bdevs_discovered": 0, 00:18:59.256 "num_base_bdevs_operational": 4, 00:18:59.256 "base_bdevs_list": [ 00:18:59.256 { 00:18:59.256 "name": "BaseBdev1", 00:18:59.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.256 "is_configured": false, 00:18:59.256 "data_offset": 0, 00:18:59.256 "data_size": 0 00:18:59.256 }, 00:18:59.256 { 00:18:59.256 "name": "BaseBdev2", 00:18:59.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.257 "is_configured": false, 00:18:59.257 "data_offset": 0, 00:18:59.257 "data_size": 0 00:18:59.257 }, 00:18:59.257 { 00:18:59.257 "name": "BaseBdev3", 00:18:59.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.257 "is_configured": false, 00:18:59.257 "data_offset": 0, 00:18:59.257 "data_size": 0 00:18:59.257 }, 00:18:59.257 { 00:18:59.257 "name": "BaseBdev4", 00:18:59.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.257 "is_configured": false, 00:18:59.257 "data_offset": 0, 00:18:59.257 "data_size": 0 00:18:59.257 } 00:18:59.257 ] 00:18:59.257 }' 00:18:59.257 19:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.257 19:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.825 19:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:00.084 [2024-06-10 19:03:14.631774] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.084 [2024-06-10 19:03:14.631803] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb06f50 name Existed_Raid, state configuring 00:19:00.084 19:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.344 [2024-06-10 19:03:14.860384] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.344 [2024-06-10 19:03:14.860411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.344 [2024-06-10 19:03:14.860419] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.344 [2024-06-10 19:03:14.860430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.344 [2024-06-10 19:03:14.860438] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.344 [2024-06-10 19:03:14.860449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.344 [2024-06-10 19:03:14.860457] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.344 [2024-06-10 19:03:14.860467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.344 19:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:00.344 [2024-06-10 19:03:15.098442] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.344 BaseBdev1 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.604 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.863 [ 00:19:00.863 { 00:19:00.863 "name": "BaseBdev1", 00:19:00.863 "aliases": [ 00:19:00.863 "2a845833-c85e-41c5-bcf3-2609036a7c3c" 00:19:00.863 ], 00:19:00.863 "product_name": "Malloc disk", 00:19:00.863 "block_size": 512, 00:19:00.863 "num_blocks": 65536, 00:19:00.863 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:00.863 "assigned_rate_limits": { 00:19:00.863 "rw_ios_per_sec": 0, 00:19:00.863 "rw_mbytes_per_sec": 0, 00:19:00.863 "r_mbytes_per_sec": 0, 00:19:00.863 "w_mbytes_per_sec": 0 00:19:00.863 }, 00:19:00.863 "claimed": true, 00:19:00.863 "claim_type": "exclusive_write", 00:19:00.863 "zoned": false, 00:19:00.863 "supported_io_types": { 00:19:00.863 "read": true, 00:19:00.863 "write": true, 00:19:00.863 "unmap": true, 00:19:00.863 "write_zeroes": true, 
00:19:00.863 "flush": true, 00:19:00.863 "reset": true, 00:19:00.863 "compare": false, 00:19:00.863 "compare_and_write": false, 00:19:00.863 "abort": true, 00:19:00.863 "nvme_admin": false, 00:19:00.863 "nvme_io": false 00:19:00.863 }, 00:19:00.863 "memory_domains": [ 00:19:00.863 { 00:19:00.863 "dma_device_id": "system", 00:19:00.863 "dma_device_type": 1 00:19:00.863 }, 00:19:00.863 { 00:19:00.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.863 "dma_device_type": 2 00:19:00.863 } 00:19:00.863 ], 00:19:00.863 "driver_specific": {} 00:19:00.863 } 00:19:00.863 ] 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.863 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.864 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.864 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.864 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.864 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.124 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.124 "name": "Existed_Raid", 00:19:01.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.124 "strip_size_kb": 64, 00:19:01.124 "state": "configuring", 00:19:01.124 "raid_level": "raid0", 00:19:01.124 "superblock": false, 00:19:01.124 "num_base_bdevs": 4, 00:19:01.124 "num_base_bdevs_discovered": 1, 00:19:01.124 "num_base_bdevs_operational": 4, 00:19:01.124 "base_bdevs_list": [ 00:19:01.124 { 00:19:01.124 "name": "BaseBdev1", 00:19:01.124 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:01.124 "is_configured": true, 00:19:01.124 "data_offset": 0, 00:19:01.124 "data_size": 65536 00:19:01.124 }, 00:19:01.124 { 00:19:01.124 "name": "BaseBdev2", 00:19:01.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.124 "is_configured": false, 00:19:01.124 "data_offset": 0, 00:19:01.124 "data_size": 0 00:19:01.124 }, 00:19:01.124 { 00:19:01.124 "name": "BaseBdev3", 00:19:01.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.124 "is_configured": false, 00:19:01.124 "data_offset": 0, 00:19:01.124 "data_size": 0 00:19:01.124 }, 00:19:01.124 { 00:19:01.124 "name": "BaseBdev4", 00:19:01.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.124 "is_configured": false, 00:19:01.124 "data_offset": 0, 00:19:01.124 "data_size": 0 00:19:01.124 } 00:19:01.124 ] 00:19:01.124 }' 00:19:01.124 19:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.124 19:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:19:01.952 [2024-06-10 19:03:16.582346] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.952 [2024-06-10 19:03:16.582380] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb067c0 name Existed_Raid, state configuring 00:19:01.952 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:02.211 [2024-06-10 19:03:16.810982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.211 [2024-06-10 19:03:16.812328] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:02.211 [2024-06-10 19:03:16.812362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:02.211 [2024-06-10 19:03:16.812372] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:02.211 [2024-06-10 19:03:16.812383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:02.211 [2024-06-10 19:03:16.812392] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:02.211 [2024-06-10 19:03:16.812403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:02.211 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:02.211 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:02.211 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:02.211 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.211 19:03:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.211 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:02.211 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.212 19:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.471 19:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.471 "name": "Existed_Raid", 00:19:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.471 "strip_size_kb": 64, 00:19:02.471 "state": "configuring", 00:19:02.471 "raid_level": "raid0", 00:19:02.471 "superblock": false, 00:19:02.471 "num_base_bdevs": 4, 00:19:02.471 "num_base_bdevs_discovered": 1, 00:19:02.471 "num_base_bdevs_operational": 4, 00:19:02.471 "base_bdevs_list": [ 00:19:02.471 { 00:19:02.471 "name": "BaseBdev1", 00:19:02.471 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:02.471 "is_configured": true, 00:19:02.471 "data_offset": 0, 00:19:02.471 "data_size": 65536 00:19:02.471 }, 00:19:02.471 { 00:19:02.471 "name": "BaseBdev2", 00:19:02.471 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.471 "is_configured": false, 00:19:02.471 "data_offset": 0, 00:19:02.471 "data_size": 0 00:19:02.471 }, 00:19:02.471 { 00:19:02.471 "name": "BaseBdev3", 00:19:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.471 "is_configured": false, 00:19:02.471 "data_offset": 0, 00:19:02.471 "data_size": 0 00:19:02.471 }, 00:19:02.471 { 00:19:02.471 "name": "BaseBdev4", 00:19:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.471 "is_configured": false, 00:19:02.471 "data_offset": 0, 00:19:02.471 "data_size": 0 00:19:02.471 } 00:19:02.471 ] 00:19:02.471 }' 00:19:02.471 19:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.471 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.040 19:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:03.299 [2024-06-10 19:03:17.825068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.299 BaseBdev2 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:03.299 19:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.299 19:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:03.558 [ 00:19:03.558 { 00:19:03.559 "name": "BaseBdev2", 00:19:03.559 "aliases": [ 00:19:03.559 "d6cf3c1c-ace0-436e-ab67-f9582507d5bf" 00:19:03.559 ], 00:19:03.559 "product_name": "Malloc disk", 00:19:03.559 "block_size": 512, 00:19:03.559 "num_blocks": 65536, 00:19:03.559 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:03.559 "assigned_rate_limits": { 00:19:03.559 "rw_ios_per_sec": 0, 00:19:03.559 "rw_mbytes_per_sec": 0, 00:19:03.559 "r_mbytes_per_sec": 0, 00:19:03.559 "w_mbytes_per_sec": 0 00:19:03.559 }, 00:19:03.559 "claimed": true, 00:19:03.559 "claim_type": "exclusive_write", 00:19:03.559 "zoned": false, 00:19:03.559 "supported_io_types": { 00:19:03.559 "read": true, 00:19:03.559 "write": true, 00:19:03.559 "unmap": true, 00:19:03.559 "write_zeroes": true, 00:19:03.559 "flush": true, 00:19:03.559 "reset": true, 00:19:03.559 "compare": false, 00:19:03.559 "compare_and_write": false, 00:19:03.559 "abort": true, 00:19:03.559 "nvme_admin": false, 00:19:03.559 "nvme_io": false 00:19:03.559 }, 00:19:03.559 "memory_domains": [ 00:19:03.559 { 00:19:03.559 "dma_device_id": "system", 00:19:03.559 "dma_device_type": 1 00:19:03.559 }, 00:19:03.559 { 00:19:03.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.559 "dma_device_type": 2 00:19:03.559 } 00:19:03.559 ], 00:19:03.559 "driver_specific": {} 00:19:03.559 } 00:19:03.559 ] 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.559 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.818 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.818 "name": "Existed_Raid", 00:19:03.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.818 "strip_size_kb": 64, 00:19:03.818 "state": "configuring", 00:19:03.818 "raid_level": "raid0", 00:19:03.818 "superblock": false, 00:19:03.818 "num_base_bdevs": 4, 00:19:03.818 "num_base_bdevs_discovered": 2, 00:19:03.818 "num_base_bdevs_operational": 4, 00:19:03.818 
"base_bdevs_list": [ 00:19:03.818 { 00:19:03.818 "name": "BaseBdev1", 00:19:03.818 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:03.818 "is_configured": true, 00:19:03.818 "data_offset": 0, 00:19:03.818 "data_size": 65536 00:19:03.818 }, 00:19:03.818 { 00:19:03.818 "name": "BaseBdev2", 00:19:03.818 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:03.818 "is_configured": true, 00:19:03.818 "data_offset": 0, 00:19:03.818 "data_size": 65536 00:19:03.818 }, 00:19:03.818 { 00:19:03.818 "name": "BaseBdev3", 00:19:03.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.818 "is_configured": false, 00:19:03.818 "data_offset": 0, 00:19:03.818 "data_size": 0 00:19:03.818 }, 00:19:03.818 { 00:19:03.818 "name": "BaseBdev4", 00:19:03.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.818 "is_configured": false, 00:19:03.818 "data_offset": 0, 00:19:03.818 "data_size": 0 00:19:03.818 } 00:19:03.818 ] 00:19:03.818 }' 00:19:03.818 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.818 19:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.387 19:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:04.387 [2024-06-10 19:03:19.143736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.646 BaseBdev3 00:19:04.646 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:04.646 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:04.646 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:04.646 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:04.647 19:03:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:04.647 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:04.647 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.647 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:04.905 [ 00:19:04.905 { 00:19:04.905 "name": "BaseBdev3", 00:19:04.905 "aliases": [ 00:19:04.905 "07ac3b09-774f-485b-8d4c-ca69d77c5c31" 00:19:04.905 ], 00:19:04.905 "product_name": "Malloc disk", 00:19:04.905 "block_size": 512, 00:19:04.905 "num_blocks": 65536, 00:19:04.905 "uuid": "07ac3b09-774f-485b-8d4c-ca69d77c5c31", 00:19:04.905 "assigned_rate_limits": { 00:19:04.905 "rw_ios_per_sec": 0, 00:19:04.905 "rw_mbytes_per_sec": 0, 00:19:04.905 "r_mbytes_per_sec": 0, 00:19:04.905 "w_mbytes_per_sec": 0 00:19:04.905 }, 00:19:04.905 "claimed": true, 00:19:04.905 "claim_type": "exclusive_write", 00:19:04.905 "zoned": false, 00:19:04.905 "supported_io_types": { 00:19:04.905 "read": true, 00:19:04.905 "write": true, 00:19:04.905 "unmap": true, 00:19:04.905 "write_zeroes": true, 00:19:04.905 "flush": true, 00:19:04.905 "reset": true, 00:19:04.905 "compare": false, 00:19:04.905 "compare_and_write": false, 00:19:04.905 "abort": true, 00:19:04.905 "nvme_admin": false, 00:19:04.905 "nvme_io": false 00:19:04.905 }, 00:19:04.905 "memory_domains": [ 00:19:04.905 { 00:19:04.905 "dma_device_id": "system", 00:19:04.905 "dma_device_type": 1 00:19:04.905 }, 00:19:04.905 { 00:19:04.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.905 "dma_device_type": 2 00:19:04.905 } 00:19:04.905 ], 00:19:04.905 "driver_specific": {} 00:19:04.905 } 00:19:04.905 ] 00:19:04.905 
19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:04.905 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:04.905 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:04.905 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:04.905 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.906 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.165 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.165 "name": "Existed_Raid", 00:19:05.165 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:05.165 "strip_size_kb": 64, 00:19:05.165 "state": "configuring", 00:19:05.165 "raid_level": "raid0", 00:19:05.165 "superblock": false, 00:19:05.165 "num_base_bdevs": 4, 00:19:05.165 "num_base_bdevs_discovered": 3, 00:19:05.165 "num_base_bdevs_operational": 4, 00:19:05.165 "base_bdevs_list": [ 00:19:05.165 { 00:19:05.165 "name": "BaseBdev1", 00:19:05.165 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:05.165 "is_configured": true, 00:19:05.165 "data_offset": 0, 00:19:05.165 "data_size": 65536 00:19:05.165 }, 00:19:05.165 { 00:19:05.165 "name": "BaseBdev2", 00:19:05.165 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:05.165 "is_configured": true, 00:19:05.165 "data_offset": 0, 00:19:05.165 "data_size": 65536 00:19:05.165 }, 00:19:05.165 { 00:19:05.165 "name": "BaseBdev3", 00:19:05.165 "uuid": "07ac3b09-774f-485b-8d4c-ca69d77c5c31", 00:19:05.165 "is_configured": true, 00:19:05.165 "data_offset": 0, 00:19:05.165 "data_size": 65536 00:19:05.165 }, 00:19:05.165 { 00:19:05.165 "name": "BaseBdev4", 00:19:05.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.165 "is_configured": false, 00:19:05.165 "data_offset": 0, 00:19:05.165 "data_size": 0 00:19:05.165 } 00:19:05.165 ] 00:19:05.165 }' 00:19:05.165 19:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.165 19:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.731 19:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:05.990 [2024-06-10 19:03:20.642818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:05.991 [2024-06-10 19:03:20.642851] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb07820 00:19:05.991 [2024-06-10 19:03:20.642859] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 
262144, blocklen 512 00:19:05.991 [2024-06-10 19:03:20.643094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0ca50 00:19:05.991 [2024-06-10 19:03:20.643206] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb07820 00:19:05.991 [2024-06-10 19:03:20.643215] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb07820 00:19:05.991 [2024-06-10 19:03:20.643367] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.991 BaseBdev4 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:05.991 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.250 19:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:06.510 [ 00:19:06.510 { 00:19:06.510 "name": "BaseBdev4", 00:19:06.510 "aliases": [ 00:19:06.510 "558cb470-be60-439a-a772-f121d88fd00f" 00:19:06.510 ], 00:19:06.510 "product_name": "Malloc disk", 00:19:06.510 "block_size": 512, 00:19:06.510 "num_blocks": 65536, 00:19:06.510 "uuid": "558cb470-be60-439a-a772-f121d88fd00f", 00:19:06.510 
"assigned_rate_limits": { 00:19:06.510 "rw_ios_per_sec": 0, 00:19:06.510 "rw_mbytes_per_sec": 0, 00:19:06.510 "r_mbytes_per_sec": 0, 00:19:06.510 "w_mbytes_per_sec": 0 00:19:06.510 }, 00:19:06.510 "claimed": true, 00:19:06.510 "claim_type": "exclusive_write", 00:19:06.510 "zoned": false, 00:19:06.510 "supported_io_types": { 00:19:06.510 "read": true, 00:19:06.510 "write": true, 00:19:06.510 "unmap": true, 00:19:06.510 "write_zeroes": true, 00:19:06.510 "flush": true, 00:19:06.510 "reset": true, 00:19:06.510 "compare": false, 00:19:06.510 "compare_and_write": false, 00:19:06.510 "abort": true, 00:19:06.510 "nvme_admin": false, 00:19:06.510 "nvme_io": false 00:19:06.510 }, 00:19:06.510 "memory_domains": [ 00:19:06.510 { 00:19:06.510 "dma_device_id": "system", 00:19:06.510 "dma_device_type": 1 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.510 "dma_device_type": 2 00:19:06.510 } 00:19:06.510 ], 00:19:06.510 "driver_specific": {} 00:19:06.510 } 00:19:06.510 ] 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.510 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.769 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.770 "name": "Existed_Raid", 00:19:06.770 "uuid": "1e3d8d91-e400-47e3-b545-df58369c69da", 00:19:06.770 "strip_size_kb": 64, 00:19:06.770 "state": "online", 00:19:06.770 "raid_level": "raid0", 00:19:06.770 "superblock": false, 00:19:06.770 "num_base_bdevs": 4, 00:19:06.770 "num_base_bdevs_discovered": 4, 00:19:06.770 "num_base_bdevs_operational": 4, 00:19:06.770 "base_bdevs_list": [ 00:19:06.770 { 00:19:06.770 "name": "BaseBdev1", 00:19:06.770 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:06.770 "is_configured": true, 00:19:06.770 "data_offset": 0, 00:19:06.770 "data_size": 65536 00:19:06.770 }, 00:19:06.770 { 00:19:06.770 "name": "BaseBdev2", 00:19:06.770 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:06.770 "is_configured": true, 00:19:06.770 "data_offset": 0, 00:19:06.770 "data_size": 65536 00:19:06.770 }, 00:19:06.770 { 00:19:06.770 "name": "BaseBdev3", 00:19:06.770 "uuid": "07ac3b09-774f-485b-8d4c-ca69d77c5c31", 00:19:06.770 "is_configured": true, 00:19:06.770 "data_offset": 0, 00:19:06.770 "data_size": 65536 00:19:06.770 
}, 00:19:06.770 { 00:19:06.770 "name": "BaseBdev4", 00:19:06.770 "uuid": "558cb470-be60-439a-a772-f121d88fd00f", 00:19:06.770 "is_configured": true, 00:19:06.770 "data_offset": 0, 00:19:06.770 "data_size": 65536 00:19:06.770 } 00:19:06.770 ] 00:19:06.770 }' 00:19:06.770 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.770 19:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:07.338 19:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:07.338 [2024-06-10 19:03:22.082887] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.598 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:07.598 "name": "Existed_Raid", 00:19:07.598 "aliases": [ 00:19:07.598 "1e3d8d91-e400-47e3-b545-df58369c69da" 00:19:07.598 ], 00:19:07.598 "product_name": "Raid Volume", 00:19:07.598 "block_size": 512, 00:19:07.598 "num_blocks": 262144, 00:19:07.598 "uuid": "1e3d8d91-e400-47e3-b545-df58369c69da", 00:19:07.598 
"assigned_rate_limits": { 00:19:07.598 "rw_ios_per_sec": 0, 00:19:07.598 "rw_mbytes_per_sec": 0, 00:19:07.598 "r_mbytes_per_sec": 0, 00:19:07.598 "w_mbytes_per_sec": 0 00:19:07.598 }, 00:19:07.598 "claimed": false, 00:19:07.598 "zoned": false, 00:19:07.598 "supported_io_types": { 00:19:07.598 "read": true, 00:19:07.598 "write": true, 00:19:07.598 "unmap": true, 00:19:07.598 "write_zeroes": true, 00:19:07.598 "flush": true, 00:19:07.598 "reset": true, 00:19:07.598 "compare": false, 00:19:07.598 "compare_and_write": false, 00:19:07.598 "abort": false, 00:19:07.598 "nvme_admin": false, 00:19:07.598 "nvme_io": false 00:19:07.598 }, 00:19:07.598 "memory_domains": [ 00:19:07.598 { 00:19:07.598 "dma_device_id": "system", 00:19:07.598 "dma_device_type": 1 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.598 "dma_device_type": 2 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "system", 00:19:07.598 "dma_device_type": 1 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.598 "dma_device_type": 2 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "system", 00:19:07.598 "dma_device_type": 1 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.598 "dma_device_type": 2 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "system", 00:19:07.598 "dma_device_type": 1 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.598 "dma_device_type": 2 00:19:07.598 } 00:19:07.598 ], 00:19:07.598 "driver_specific": { 00:19:07.598 "raid": { 00:19:07.598 "uuid": "1e3d8d91-e400-47e3-b545-df58369c69da", 00:19:07.598 "strip_size_kb": 64, 00:19:07.598 "state": "online", 00:19:07.598 "raid_level": "raid0", 00:19:07.598 "superblock": false, 00:19:07.598 "num_base_bdevs": 4, 00:19:07.598 "num_base_bdevs_discovered": 4, 00:19:07.598 "num_base_bdevs_operational": 4, 00:19:07.598 "base_bdevs_list": [ 
00:19:07.598 { 00:19:07.598 "name": "BaseBdev1", 00:19:07.598 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:07.598 "is_configured": true, 00:19:07.598 "data_offset": 0, 00:19:07.598 "data_size": 65536 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "name": "BaseBdev2", 00:19:07.598 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:07.598 "is_configured": true, 00:19:07.598 "data_offset": 0, 00:19:07.598 "data_size": 65536 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "name": "BaseBdev3", 00:19:07.598 "uuid": "07ac3b09-774f-485b-8d4c-ca69d77c5c31", 00:19:07.598 "is_configured": true, 00:19:07.598 "data_offset": 0, 00:19:07.598 "data_size": 65536 00:19:07.598 }, 00:19:07.598 { 00:19:07.598 "name": "BaseBdev4", 00:19:07.598 "uuid": "558cb470-be60-439a-a772-f121d88fd00f", 00:19:07.598 "is_configured": true, 00:19:07.598 "data_offset": 0, 00:19:07.598 "data_size": 65536 00:19:07.598 } 00:19:07.598 ] 00:19:07.598 } 00:19:07.598 } 00:19:07.598 }' 00:19:07.598 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.598 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:07.598 BaseBdev2 00:19:07.598 BaseBdev3 00:19:07.598 BaseBdev4' 00:19:07.598 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.598 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:07.598 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.857 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.857 "name": "BaseBdev1", 00:19:07.857 "aliases": [ 00:19:07.857 "2a845833-c85e-41c5-bcf3-2609036a7c3c" 00:19:07.857 ], 00:19:07.857 "product_name": 
"Malloc disk", 00:19:07.857 "block_size": 512, 00:19:07.857 "num_blocks": 65536, 00:19:07.857 "uuid": "2a845833-c85e-41c5-bcf3-2609036a7c3c", 00:19:07.857 "assigned_rate_limits": { 00:19:07.857 "rw_ios_per_sec": 0, 00:19:07.857 "rw_mbytes_per_sec": 0, 00:19:07.857 "r_mbytes_per_sec": 0, 00:19:07.857 "w_mbytes_per_sec": 0 00:19:07.857 }, 00:19:07.857 "claimed": true, 00:19:07.857 "claim_type": "exclusive_write", 00:19:07.857 "zoned": false, 00:19:07.857 "supported_io_types": { 00:19:07.857 "read": true, 00:19:07.857 "write": true, 00:19:07.857 "unmap": true, 00:19:07.857 "write_zeroes": true, 00:19:07.857 "flush": true, 00:19:07.857 "reset": true, 00:19:07.857 "compare": false, 00:19:07.857 "compare_and_write": false, 00:19:07.857 "abort": true, 00:19:07.857 "nvme_admin": false, 00:19:07.857 "nvme_io": false 00:19:07.857 }, 00:19:07.857 "memory_domains": [ 00:19:07.857 { 00:19:07.857 "dma_device_id": "system", 00:19:07.857 "dma_device_type": 1 00:19:07.857 }, 00:19:07.857 { 00:19:07.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.857 "dma_device_type": 2 00:19:07.857 } 00:19:07.857 ], 00:19:07.857 "driver_specific": {} 00:19:07.857 }' 00:19:07.857 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.857 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.857 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.858 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.858 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.858 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.858 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.858 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.858 
19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.858 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.117 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.117 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.117 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.117 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.117 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:08.376 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.376 "name": "BaseBdev2", 00:19:08.376 "aliases": [ 00:19:08.376 "d6cf3c1c-ace0-436e-ab67-f9582507d5bf" 00:19:08.376 ], 00:19:08.376 "product_name": "Malloc disk", 00:19:08.376 "block_size": 512, 00:19:08.376 "num_blocks": 65536, 00:19:08.376 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:08.376 "assigned_rate_limits": { 00:19:08.376 "rw_ios_per_sec": 0, 00:19:08.376 "rw_mbytes_per_sec": 0, 00:19:08.376 "r_mbytes_per_sec": 0, 00:19:08.376 "w_mbytes_per_sec": 0 00:19:08.376 }, 00:19:08.376 "claimed": true, 00:19:08.376 "claim_type": "exclusive_write", 00:19:08.376 "zoned": false, 00:19:08.376 "supported_io_types": { 00:19:08.376 "read": true, 00:19:08.376 "write": true, 00:19:08.376 "unmap": true, 00:19:08.376 "write_zeroes": true, 00:19:08.376 "flush": true, 00:19:08.376 "reset": true, 00:19:08.376 "compare": false, 00:19:08.376 "compare_and_write": false, 00:19:08.376 "abort": true, 00:19:08.376 "nvme_admin": false, 00:19:08.376 "nvme_io": false 00:19:08.376 }, 00:19:08.376 "memory_domains": [ 00:19:08.376 { 00:19:08.376 "dma_device_id": 
"system", 00:19:08.376 "dma_device_type": 1 00:19:08.376 }, 00:19:08.376 { 00:19:08.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.376 "dma_device_type": 2 00:19:08.376 } 00:19:08.376 ], 00:19:08.376 "driver_specific": {} 00:19:08.376 }' 00:19:08.376 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.376 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.376 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.376 19:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.376 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.376 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.376 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.376 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.376 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.376 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:19:08.636 "name": "BaseBdev3", 00:19:08.636 "aliases": [ 00:19:08.636 "07ac3b09-774f-485b-8d4c-ca69d77c5c31" 00:19:08.636 ], 00:19:08.636 "product_name": "Malloc disk", 00:19:08.636 "block_size": 512, 00:19:08.636 "num_blocks": 65536, 00:19:08.636 "uuid": "07ac3b09-774f-485b-8d4c-ca69d77c5c31", 00:19:08.636 "assigned_rate_limits": { 00:19:08.636 "rw_ios_per_sec": 0, 00:19:08.636 "rw_mbytes_per_sec": 0, 00:19:08.636 "r_mbytes_per_sec": 0, 00:19:08.636 "w_mbytes_per_sec": 0 00:19:08.636 }, 00:19:08.636 "claimed": true, 00:19:08.636 "claim_type": "exclusive_write", 00:19:08.636 "zoned": false, 00:19:08.636 "supported_io_types": { 00:19:08.636 "read": true, 00:19:08.636 "write": true, 00:19:08.636 "unmap": true, 00:19:08.636 "write_zeroes": true, 00:19:08.636 "flush": true, 00:19:08.636 "reset": true, 00:19:08.636 "compare": false, 00:19:08.636 "compare_and_write": false, 00:19:08.636 "abort": true, 00:19:08.636 "nvme_admin": false, 00:19:08.636 "nvme_io": false 00:19:08.636 }, 00:19:08.636 "memory_domains": [ 00:19:08.636 { 00:19:08.636 "dma_device_id": "system", 00:19:08.636 "dma_device_type": 1 00:19:08.636 }, 00:19:08.636 { 00:19:08.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.636 "dma_device_type": 2 00:19:08.636 } 00:19:08.636 ], 00:19:08.636 "driver_specific": {} 00:19:08.636 }' 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.636 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.895 19:03:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:08.895 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:09.154 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:09.154 "name": "BaseBdev4", 00:19:09.154 "aliases": [ 00:19:09.154 "558cb470-be60-439a-a772-f121d88fd00f" 00:19:09.154 ], 00:19:09.154 "product_name": "Malloc disk", 00:19:09.154 "block_size": 512, 00:19:09.154 "num_blocks": 65536, 00:19:09.154 "uuid": "558cb470-be60-439a-a772-f121d88fd00f", 00:19:09.154 "assigned_rate_limits": { 00:19:09.154 "rw_ios_per_sec": 0, 00:19:09.154 "rw_mbytes_per_sec": 0, 00:19:09.154 "r_mbytes_per_sec": 0, 00:19:09.154 "w_mbytes_per_sec": 0 00:19:09.154 }, 00:19:09.154 "claimed": true, 00:19:09.154 "claim_type": "exclusive_write", 00:19:09.154 "zoned": false, 00:19:09.154 "supported_io_types": { 00:19:09.154 "read": true, 00:19:09.154 "write": true, 00:19:09.154 "unmap": true, 00:19:09.154 "write_zeroes": true, 00:19:09.154 "flush": true, 00:19:09.154 "reset": true, 00:19:09.154 "compare": false, 00:19:09.154 "compare_and_write": 
false, 00:19:09.154 "abort": true, 00:19:09.154 "nvme_admin": false, 00:19:09.154 "nvme_io": false 00:19:09.154 }, 00:19:09.154 "memory_domains": [ 00:19:09.154 { 00:19:09.154 "dma_device_id": "system", 00:19:09.154 "dma_device_type": 1 00:19:09.154 }, 00:19:09.154 { 00:19:09.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.154 "dma_device_type": 2 00:19:09.154 } 00:19:09.154 ], 00:19:09.154 "driver_specific": {} 00:19:09.154 }' 00:19:09.154 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.154 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:09.415 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:09.415 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.415 19:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:09.415 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:09.688 [2024-06-10 19:03:24.340790] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.688 
[2024-06-10 19:03:24.340815] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.688 [2024-06-10 19:03:24.340858] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.688 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.999 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.999 "name": "Existed_Raid", 00:19:09.999 "uuid": "1e3d8d91-e400-47e3-b545-df58369c69da", 00:19:09.999 "strip_size_kb": 64, 00:19:09.999 "state": "offline", 00:19:09.999 "raid_level": "raid0", 00:19:09.999 "superblock": false, 00:19:09.999 "num_base_bdevs": 4, 00:19:09.999 "num_base_bdevs_discovered": 3, 00:19:09.999 "num_base_bdevs_operational": 3, 00:19:09.999 "base_bdevs_list": [ 00:19:09.999 { 00:19:09.999 "name": null, 00:19:09.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.999 "is_configured": false, 00:19:09.999 "data_offset": 0, 00:19:09.999 "data_size": 65536 00:19:09.999 }, 00:19:09.999 { 00:19:09.999 "name": "BaseBdev2", 00:19:09.999 "uuid": "d6cf3c1c-ace0-436e-ab67-f9582507d5bf", 00:19:09.999 "is_configured": true, 00:19:09.999 "data_offset": 0, 00:19:09.999 "data_size": 65536 00:19:09.999 }, 00:19:09.999 { 00:19:09.999 "name": "BaseBdev3", 00:19:09.999 "uuid": "07ac3b09-774f-485b-8d4c-ca69d77c5c31", 00:19:09.999 "is_configured": true, 00:19:09.999 "data_offset": 0, 00:19:09.999 "data_size": 65536 00:19:09.999 }, 00:19:09.999 { 00:19:09.999 "name": "BaseBdev4", 00:19:09.999 "uuid": "558cb470-be60-439a-a772-f121d88fd00f", 00:19:09.999 "is_configured": true, 00:19:09.999 "data_offset": 0, 00:19:09.999 "data_size": 65536 00:19:09.999 } 00:19:09.999 ] 00:19:09.999 }' 00:19:09.999 19:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.999 19:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.568 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:10.568 19:03:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.568 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.568 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:10.827 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:10.827 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.827 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:10.827 [2024-06-10 19:03:25.561110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.086 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:11.087 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.087 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.087 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:11.087 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:11.087 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.087 19:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:11.346 [2024-06-10 19:03:26.012222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev3 00:19:11.346 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:11.346 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.346 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.346 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:11.605 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:11.605 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.605 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:11.864 [2024-06-10 19:03:26.471453] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:11.865 [2024-06-10 19:03:26.471491] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb07820 name Existed_Raid, state offline 00:19:11.865 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:11.865 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:11.865 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.865 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:12.124 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:12.124 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:12.124 19:03:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:12.124 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:12.124 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.124 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:12.383 BaseBdev2 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:12.383 19:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.642 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:12.642 [ 00:19:12.642 { 00:19:12.642 "name": "BaseBdev2", 00:19:12.642 "aliases": [ 00:19:12.642 "e4df5224-8cb6-4c9b-b7c6-96e2d33af812" 00:19:12.642 ], 00:19:12.642 "product_name": "Malloc disk", 00:19:12.642 "block_size": 512, 00:19:12.642 "num_blocks": 65536, 00:19:12.642 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:12.642 "assigned_rate_limits": { 
00:19:12.642 "rw_ios_per_sec": 0, 00:19:12.642 "rw_mbytes_per_sec": 0, 00:19:12.642 "r_mbytes_per_sec": 0, 00:19:12.642 "w_mbytes_per_sec": 0 00:19:12.642 }, 00:19:12.642 "claimed": false, 00:19:12.642 "zoned": false, 00:19:12.642 "supported_io_types": { 00:19:12.642 "read": true, 00:19:12.642 "write": true, 00:19:12.642 "unmap": true, 00:19:12.642 "write_zeroes": true, 00:19:12.642 "flush": true, 00:19:12.642 "reset": true, 00:19:12.642 "compare": false, 00:19:12.642 "compare_and_write": false, 00:19:12.642 "abort": true, 00:19:12.642 "nvme_admin": false, 00:19:12.642 "nvme_io": false 00:19:12.642 }, 00:19:12.643 "memory_domains": [ 00:19:12.643 { 00:19:12.643 "dma_device_id": "system", 00:19:12.643 "dma_device_type": 1 00:19:12.643 }, 00:19:12.643 { 00:19:12.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.643 "dma_device_type": 2 00:19:12.643 } 00:19:12.643 ], 00:19:12.643 "driver_specific": {} 00:19:12.643 } 00:19:12.643 ] 00:19:12.643 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:12.643 19:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:12.643 19:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.643 19:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:12.902 BaseBdev3 00:19:12.902 19:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:12.902 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:12.902 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:12.902 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:12.902 19:03:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:12.902 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:12.902 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.161 19:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:13.421 [ 00:19:13.421 { 00:19:13.421 "name": "BaseBdev3", 00:19:13.421 "aliases": [ 00:19:13.421 "71a7a7df-7ce3-497c-85e1-9c12028c0d68" 00:19:13.421 ], 00:19:13.421 "product_name": "Malloc disk", 00:19:13.421 "block_size": 512, 00:19:13.421 "num_blocks": 65536, 00:19:13.421 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:13.421 "assigned_rate_limits": { 00:19:13.421 "rw_ios_per_sec": 0, 00:19:13.421 "rw_mbytes_per_sec": 0, 00:19:13.421 "r_mbytes_per_sec": 0, 00:19:13.421 "w_mbytes_per_sec": 0 00:19:13.421 }, 00:19:13.421 "claimed": false, 00:19:13.421 "zoned": false, 00:19:13.421 "supported_io_types": { 00:19:13.421 "read": true, 00:19:13.421 "write": true, 00:19:13.421 "unmap": true, 00:19:13.421 "write_zeroes": true, 00:19:13.421 "flush": true, 00:19:13.421 "reset": true, 00:19:13.421 "compare": false, 00:19:13.421 "compare_and_write": false, 00:19:13.421 "abort": true, 00:19:13.421 "nvme_admin": false, 00:19:13.421 "nvme_io": false 00:19:13.421 }, 00:19:13.421 "memory_domains": [ 00:19:13.421 { 00:19:13.421 "dma_device_id": "system", 00:19:13.421 "dma_device_type": 1 00:19:13.421 }, 00:19:13.421 { 00:19:13.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.421 "dma_device_type": 2 00:19:13.421 } 00:19:13.421 ], 00:19:13.421 "driver_specific": {} 00:19:13.421 } 00:19:13.421 ] 00:19:13.421 19:03:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@906 -- # return 0 00:19:13.421 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:13.421 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:13.421 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:13.681 BaseBdev4 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:13.681 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.940 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:14.199 [ 00:19:14.199 { 00:19:14.199 "name": "BaseBdev4", 00:19:14.199 "aliases": [ 00:19:14.199 "d3ed02dd-24a9-4858-8be4-c89a17b05873" 00:19:14.199 ], 00:19:14.199 "product_name": "Malloc disk", 00:19:14.199 "block_size": 512, 00:19:14.199 "num_blocks": 65536, 00:19:14.199 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:14.199 "assigned_rate_limits": { 00:19:14.199 "rw_ios_per_sec": 0, 
00:19:14.199 "rw_mbytes_per_sec": 0, 00:19:14.199 "r_mbytes_per_sec": 0, 00:19:14.200 "w_mbytes_per_sec": 0 00:19:14.200 }, 00:19:14.200 "claimed": false, 00:19:14.200 "zoned": false, 00:19:14.200 "supported_io_types": { 00:19:14.200 "read": true, 00:19:14.200 "write": true, 00:19:14.200 "unmap": true, 00:19:14.200 "write_zeroes": true, 00:19:14.200 "flush": true, 00:19:14.200 "reset": true, 00:19:14.200 "compare": false, 00:19:14.200 "compare_and_write": false, 00:19:14.200 "abort": true, 00:19:14.200 "nvme_admin": false, 00:19:14.200 "nvme_io": false 00:19:14.200 }, 00:19:14.200 "memory_domains": [ 00:19:14.200 { 00:19:14.200 "dma_device_id": "system", 00:19:14.200 "dma_device_type": 1 00:19:14.200 }, 00:19:14.200 { 00:19:14.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.200 "dma_device_type": 2 00:19:14.200 } 00:19:14.200 ], 00:19:14.200 "driver_specific": {} 00:19:14.200 } 00:19:14.200 ] 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.200 [2024-06-10 19:03:28.936850] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.200 [2024-06-10 19:03:28.936890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.200 [2024-06-10 19:03:28.936906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.200 [2024-06-10 19:03:28.938116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:19:14.200 [2024-06-10 19:03:28.938154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.200 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.458 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.458 19:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.458 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.458 "name": "Existed_Raid", 00:19:14.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.458 "strip_size_kb": 64, 00:19:14.458 "state": "configuring", 00:19:14.458 "raid_level": "raid0", 00:19:14.458 "superblock": false, 00:19:14.458 "num_base_bdevs": 
4, 00:19:14.458 "num_base_bdevs_discovered": 3, 00:19:14.458 "num_base_bdevs_operational": 4, 00:19:14.458 "base_bdevs_list": [ 00:19:14.458 { 00:19:14.458 "name": "BaseBdev1", 00:19:14.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.458 "is_configured": false, 00:19:14.458 "data_offset": 0, 00:19:14.458 "data_size": 0 00:19:14.458 }, 00:19:14.458 { 00:19:14.458 "name": "BaseBdev2", 00:19:14.458 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:14.458 "is_configured": true, 00:19:14.458 "data_offset": 0, 00:19:14.458 "data_size": 65536 00:19:14.458 }, 00:19:14.458 { 00:19:14.458 "name": "BaseBdev3", 00:19:14.458 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:14.458 "is_configured": true, 00:19:14.458 "data_offset": 0, 00:19:14.458 "data_size": 65536 00:19:14.458 }, 00:19:14.458 { 00:19:14.458 "name": "BaseBdev4", 00:19:14.458 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:14.458 "is_configured": true, 00:19:14.458 "data_offset": 0, 00:19:14.458 "data_size": 65536 00:19:14.458 } 00:19:14.458 ] 00:19:14.458 }' 00:19:14.458 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.458 19:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.026 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:15.285 [2024-06-10 19:03:29.939466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.285 19:03:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.285 19:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.544 19:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.544 "name": "Existed_Raid", 00:19:15.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.544 "strip_size_kb": 64, 00:19:15.544 "state": "configuring", 00:19:15.544 "raid_level": "raid0", 00:19:15.544 "superblock": false, 00:19:15.544 "num_base_bdevs": 4, 00:19:15.544 "num_base_bdevs_discovered": 2, 00:19:15.544 "num_base_bdevs_operational": 4, 00:19:15.544 "base_bdevs_list": [ 00:19:15.544 { 00:19:15.544 "name": "BaseBdev1", 00:19:15.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.544 "is_configured": false, 00:19:15.544 "data_offset": 0, 00:19:15.544 "data_size": 0 00:19:15.544 }, 00:19:15.544 { 00:19:15.544 "name": null, 00:19:15.544 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:15.544 "is_configured": false, 00:19:15.544 "data_offset": 0, 00:19:15.544 
"data_size": 65536 00:19:15.544 }, 00:19:15.544 { 00:19:15.544 "name": "BaseBdev3", 00:19:15.544 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:15.544 "is_configured": true, 00:19:15.544 "data_offset": 0, 00:19:15.544 "data_size": 65536 00:19:15.544 }, 00:19:15.544 { 00:19:15.544 "name": "BaseBdev4", 00:19:15.544 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:15.544 "is_configured": true, 00:19:15.544 "data_offset": 0, 00:19:15.544 "data_size": 65536 00:19:15.544 } 00:19:15.544 ] 00:19:15.544 }' 00:19:15.544 19:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.544 19:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 19:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.112 19:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:16.371 19:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:16.371 19:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:16.371 [2024-06-10 19:03:31.110136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.371 BaseBdev1 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:16.630 19:03:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.630 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:16.889 [ 00:19:16.889 { 00:19:16.889 "name": "BaseBdev1", 00:19:16.889 "aliases": [ 00:19:16.889 "596232c2-082d-4d5f-8057-2383e7fec73c" 00:19:16.889 ], 00:19:16.889 "product_name": "Malloc disk", 00:19:16.889 "block_size": 512, 00:19:16.889 "num_blocks": 65536, 00:19:16.889 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:16.889 "assigned_rate_limits": { 00:19:16.889 "rw_ios_per_sec": 0, 00:19:16.889 "rw_mbytes_per_sec": 0, 00:19:16.889 "r_mbytes_per_sec": 0, 00:19:16.889 "w_mbytes_per_sec": 0 00:19:16.889 }, 00:19:16.889 "claimed": true, 00:19:16.889 "claim_type": "exclusive_write", 00:19:16.889 "zoned": false, 00:19:16.889 "supported_io_types": { 00:19:16.889 "read": true, 00:19:16.889 "write": true, 00:19:16.889 "unmap": true, 00:19:16.889 "write_zeroes": true, 00:19:16.889 "flush": true, 00:19:16.889 "reset": true, 00:19:16.889 "compare": false, 00:19:16.889 "compare_and_write": false, 00:19:16.889 "abort": true, 00:19:16.889 "nvme_admin": false, 00:19:16.889 "nvme_io": false 00:19:16.889 }, 00:19:16.889 "memory_domains": [ 00:19:16.889 { 00:19:16.889 "dma_device_id": "system", 00:19:16.889 "dma_device_type": 1 00:19:16.889 }, 00:19:16.889 { 00:19:16.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.889 "dma_device_type": 2 00:19:16.889 } 00:19:16.889 ], 00:19:16.889 "driver_specific": {} 00:19:16.889 } 00:19:16.889 ] 00:19:16.889 
19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.889 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.149 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.149 "name": "Existed_Raid", 00:19:17.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.149 "strip_size_kb": 64, 00:19:17.149 "state": "configuring", 00:19:17.149 "raid_level": "raid0", 00:19:17.149 "superblock": false, 00:19:17.149 "num_base_bdevs": 4, 00:19:17.149 
"num_base_bdevs_discovered": 3, 00:19:17.149 "num_base_bdevs_operational": 4, 00:19:17.149 "base_bdevs_list": [ 00:19:17.149 { 00:19:17.149 "name": "BaseBdev1", 00:19:17.149 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:17.149 "is_configured": true, 00:19:17.149 "data_offset": 0, 00:19:17.149 "data_size": 65536 00:19:17.149 }, 00:19:17.149 { 00:19:17.149 "name": null, 00:19:17.149 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:17.149 "is_configured": false, 00:19:17.149 "data_offset": 0, 00:19:17.149 "data_size": 65536 00:19:17.149 }, 00:19:17.149 { 00:19:17.149 "name": "BaseBdev3", 00:19:17.149 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:17.149 "is_configured": true, 00:19:17.149 "data_offset": 0, 00:19:17.149 "data_size": 65536 00:19:17.149 }, 00:19:17.149 { 00:19:17.149 "name": "BaseBdev4", 00:19:17.149 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:17.149 "is_configured": true, 00:19:17.149 "data_offset": 0, 00:19:17.149 "data_size": 65536 00:19:17.149 } 00:19:17.149 ] 00:19:17.149 }' 00:19:17.149 19:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.149 19:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.717 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.718 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:17.976 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:17.977 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:18.236 [2024-06-10 19:03:32.738450] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.236 "name": "Existed_Raid", 00:19:18.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.236 "strip_size_kb": 64, 00:19:18.236 "state": "configuring", 00:19:18.236 "raid_level": "raid0", 00:19:18.236 "superblock": false, 00:19:18.236 "num_base_bdevs": 4, 00:19:18.236 "num_base_bdevs_discovered": 2, 00:19:18.236 "num_base_bdevs_operational": 4, 00:19:18.236 
"base_bdevs_list": [ 00:19:18.236 { 00:19:18.236 "name": "BaseBdev1", 00:19:18.236 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:18.236 "is_configured": true, 00:19:18.236 "data_offset": 0, 00:19:18.236 "data_size": 65536 00:19:18.236 }, 00:19:18.236 { 00:19:18.236 "name": null, 00:19:18.236 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:18.236 "is_configured": false, 00:19:18.236 "data_offset": 0, 00:19:18.236 "data_size": 65536 00:19:18.236 }, 00:19:18.236 { 00:19:18.236 "name": null, 00:19:18.236 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:18.236 "is_configured": false, 00:19:18.236 "data_offset": 0, 00:19:18.236 "data_size": 65536 00:19:18.236 }, 00:19:18.236 { 00:19:18.236 "name": "BaseBdev4", 00:19:18.236 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:18.236 "is_configured": true, 00:19:18.236 "data_offset": 0, 00:19:18.236 "data_size": 65536 00:19:18.236 } 00:19:18.236 ] 00:19:18.236 }' 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.236 19:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.805 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.805 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.064 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:19.064 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:19.324 [2024-06-10 19:03:33.977754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.324 19:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.583 19:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.583 "name": "Existed_Raid", 00:19:19.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.583 "strip_size_kb": 64, 00:19:19.583 "state": "configuring", 00:19:19.583 "raid_level": "raid0", 00:19:19.583 "superblock": false, 00:19:19.583 "num_base_bdevs": 4, 00:19:19.583 "num_base_bdevs_discovered": 3, 00:19:19.583 "num_base_bdevs_operational": 4, 00:19:19.583 "base_bdevs_list": [ 00:19:19.583 { 00:19:19.583 "name": "BaseBdev1", 00:19:19.583 
"uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:19.583 "is_configured": true, 00:19:19.583 "data_offset": 0, 00:19:19.583 "data_size": 65536 00:19:19.583 }, 00:19:19.583 { 00:19:19.583 "name": null, 00:19:19.583 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:19.583 "is_configured": false, 00:19:19.583 "data_offset": 0, 00:19:19.583 "data_size": 65536 00:19:19.583 }, 00:19:19.583 { 00:19:19.583 "name": "BaseBdev3", 00:19:19.583 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:19.583 "is_configured": true, 00:19:19.583 "data_offset": 0, 00:19:19.583 "data_size": 65536 00:19:19.583 }, 00:19:19.583 { 00:19:19.583 "name": "BaseBdev4", 00:19:19.583 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:19.583 "is_configured": true, 00:19:19.583 "data_offset": 0, 00:19:19.583 "data_size": 65536 00:19:19.583 } 00:19:19.583 ] 00:19:19.583 }' 00:19:19.583 19:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.583 19:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.152 19:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.152 19:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:20.412 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:20.412 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:20.671 [2024-06-10 19:03:35.225062] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:20.671 19:03:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.671 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.930 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.930 "name": "Existed_Raid", 00:19:20.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.930 "strip_size_kb": 64, 00:19:20.930 "state": "configuring", 00:19:20.930 "raid_level": "raid0", 00:19:20.930 "superblock": false, 00:19:20.930 "num_base_bdevs": 4, 00:19:20.930 "num_base_bdevs_discovered": 2, 00:19:20.930 "num_base_bdevs_operational": 4, 00:19:20.930 "base_bdevs_list": [ 00:19:20.930 { 00:19:20.930 "name": null, 00:19:20.930 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:20.930 "is_configured": false, 00:19:20.930 "data_offset": 0, 
00:19:20.930 "data_size": 65536 00:19:20.930 }, 00:19:20.930 { 00:19:20.930 "name": null, 00:19:20.930 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:20.930 "is_configured": false, 00:19:20.930 "data_offset": 0, 00:19:20.930 "data_size": 65536 00:19:20.930 }, 00:19:20.930 { 00:19:20.930 "name": "BaseBdev3", 00:19:20.930 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:20.930 "is_configured": true, 00:19:20.930 "data_offset": 0, 00:19:20.930 "data_size": 65536 00:19:20.930 }, 00:19:20.930 { 00:19:20.930 "name": "BaseBdev4", 00:19:20.930 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:20.930 "is_configured": true, 00:19:20.930 "data_offset": 0, 00:19:20.930 "data_size": 65536 00:19:20.930 } 00:19:20.930 ] 00:19:20.930 }' 00:19:20.930 19:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.930 19:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.498 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.498 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:21.757 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:21.757 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:21.757 [2024-06-10 19:03:36.498231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.017 "name": "Existed_Raid", 00:19:22.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.017 "strip_size_kb": 64, 00:19:22.017 "state": "configuring", 00:19:22.017 "raid_level": "raid0", 00:19:22.017 "superblock": false, 00:19:22.017 "num_base_bdevs": 4, 00:19:22.017 "num_base_bdevs_discovered": 3, 00:19:22.017 "num_base_bdevs_operational": 4, 00:19:22.017 "base_bdevs_list": [ 00:19:22.017 { 00:19:22.017 "name": null, 00:19:22.017 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:22.017 "is_configured": false, 00:19:22.017 "data_offset": 0, 00:19:22.017 "data_size": 65536 00:19:22.017 }, 00:19:22.017 { 
00:19:22.017 "name": "BaseBdev2", 00:19:22.017 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:22.017 "is_configured": true, 00:19:22.017 "data_offset": 0, 00:19:22.017 "data_size": 65536 00:19:22.017 }, 00:19:22.017 { 00:19:22.017 "name": "BaseBdev3", 00:19:22.017 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:22.017 "is_configured": true, 00:19:22.017 "data_offset": 0, 00:19:22.017 "data_size": 65536 00:19:22.017 }, 00:19:22.017 { 00:19:22.017 "name": "BaseBdev4", 00:19:22.017 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:22.017 "is_configured": true, 00:19:22.017 "data_offset": 0, 00:19:22.017 "data_size": 65536 00:19:22.017 } 00:19:22.017 ] 00:19:22.017 }' 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.017 19:03:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.586 19:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.586 19:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.845 19:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:22.845 19:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.845 19:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:23.104 19:03:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 596232c2-082d-4d5f-8057-2383e7fec73c 00:19:23.362 [2024-06-10 19:03:37.993359] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:23.362 [2024-06-10 19:03:37.993393] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb05d70 00:19:23.362 [2024-06-10 19:03:37.993401] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:23.362 [2024-06-10 19:03:37.993584] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0e860 00:19:23.362 [2024-06-10 19:03:37.993691] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb05d70 00:19:23.362 [2024-06-10 19:03:37.993700] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb05d70 00:19:23.362 [2024-06-10 19:03:37.993849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.362 NewBaseBdev 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:23.362 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.620 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:23.879 [ 00:19:23.879 { 
00:19:23.879 "name": "NewBaseBdev", 00:19:23.879 "aliases": [ 00:19:23.879 "596232c2-082d-4d5f-8057-2383e7fec73c" 00:19:23.879 ], 00:19:23.879 "product_name": "Malloc disk", 00:19:23.879 "block_size": 512, 00:19:23.879 "num_blocks": 65536, 00:19:23.879 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:23.879 "assigned_rate_limits": { 00:19:23.879 "rw_ios_per_sec": 0, 00:19:23.879 "rw_mbytes_per_sec": 0, 00:19:23.879 "r_mbytes_per_sec": 0, 00:19:23.879 "w_mbytes_per_sec": 0 00:19:23.879 }, 00:19:23.879 "claimed": true, 00:19:23.879 "claim_type": "exclusive_write", 00:19:23.879 "zoned": false, 00:19:23.879 "supported_io_types": { 00:19:23.879 "read": true, 00:19:23.879 "write": true, 00:19:23.879 "unmap": true, 00:19:23.879 "write_zeroes": true, 00:19:23.879 "flush": true, 00:19:23.879 "reset": true, 00:19:23.879 "compare": false, 00:19:23.879 "compare_and_write": false, 00:19:23.879 "abort": true, 00:19:23.879 "nvme_admin": false, 00:19:23.879 "nvme_io": false 00:19:23.879 }, 00:19:23.879 "memory_domains": [ 00:19:23.879 { 00:19:23.879 "dma_device_id": "system", 00:19:23.879 "dma_device_type": 1 00:19:23.879 }, 00:19:23.879 { 00:19:23.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.879 "dma_device_type": 2 00:19:23.879 } 00:19:23.879 ], 00:19:23.879 "driver_specific": {} 00:19:23.879 } 00:19:23.879 ] 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.879 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.137 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.137 "name": "Existed_Raid", 00:19:24.137 "uuid": "e1fd8bf5-8b25-4761-9738-84bd49dce926", 00:19:24.137 "strip_size_kb": 64, 00:19:24.137 "state": "online", 00:19:24.137 "raid_level": "raid0", 00:19:24.137 "superblock": false, 00:19:24.137 "num_base_bdevs": 4, 00:19:24.137 "num_base_bdevs_discovered": 4, 00:19:24.137 "num_base_bdevs_operational": 4, 00:19:24.137 "base_bdevs_list": [ 00:19:24.137 { 00:19:24.137 "name": "NewBaseBdev", 00:19:24.137 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:24.137 "is_configured": true, 00:19:24.137 "data_offset": 0, 00:19:24.137 "data_size": 65536 00:19:24.137 }, 00:19:24.137 { 00:19:24.137 "name": "BaseBdev2", 00:19:24.137 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:24.137 "is_configured": true, 00:19:24.137 "data_offset": 0, 00:19:24.137 "data_size": 65536 00:19:24.138 }, 00:19:24.138 { 00:19:24.138 "name": "BaseBdev3", 00:19:24.138 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 
00:19:24.138 "is_configured": true, 00:19:24.138 "data_offset": 0, 00:19:24.138 "data_size": 65536 00:19:24.138 }, 00:19:24.138 { 00:19:24.138 "name": "BaseBdev4", 00:19:24.138 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:24.138 "is_configured": true, 00:19:24.138 "data_offset": 0, 00:19:24.138 "data_size": 65536 00:19:24.138 } 00:19:24.138 ] 00:19:24.138 }' 00:19:24.138 19:03:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.138 19:03:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:24.705 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:24.963 [2024-06-10 19:03:39.477544] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.963 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:24.963 "name": "Existed_Raid", 00:19:24.963 "aliases": [ 00:19:24.963 "e1fd8bf5-8b25-4761-9738-84bd49dce926" 00:19:24.963 ], 00:19:24.963 "product_name": "Raid Volume", 00:19:24.963 "block_size": 512, 00:19:24.963 
"num_blocks": 262144, 00:19:24.963 "uuid": "e1fd8bf5-8b25-4761-9738-84bd49dce926", 00:19:24.963 "assigned_rate_limits": { 00:19:24.963 "rw_ios_per_sec": 0, 00:19:24.963 "rw_mbytes_per_sec": 0, 00:19:24.963 "r_mbytes_per_sec": 0, 00:19:24.963 "w_mbytes_per_sec": 0 00:19:24.963 }, 00:19:24.963 "claimed": false, 00:19:24.963 "zoned": false, 00:19:24.963 "supported_io_types": { 00:19:24.963 "read": true, 00:19:24.963 "write": true, 00:19:24.963 "unmap": true, 00:19:24.963 "write_zeroes": true, 00:19:24.963 "flush": true, 00:19:24.963 "reset": true, 00:19:24.963 "compare": false, 00:19:24.963 "compare_and_write": false, 00:19:24.963 "abort": false, 00:19:24.963 "nvme_admin": false, 00:19:24.963 "nvme_io": false 00:19:24.963 }, 00:19:24.963 "memory_domains": [ 00:19:24.963 { 00:19:24.963 "dma_device_id": "system", 00:19:24.963 "dma_device_type": 1 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.963 "dma_device_type": 2 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "system", 00:19:24.963 "dma_device_type": 1 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.963 "dma_device_type": 2 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "system", 00:19:24.963 "dma_device_type": 1 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.963 "dma_device_type": 2 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "system", 00:19:24.963 "dma_device_type": 1 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.963 "dma_device_type": 2 00:19:24.963 } 00:19:24.963 ], 00:19:24.963 "driver_specific": { 00:19:24.963 "raid": { 00:19:24.963 "uuid": "e1fd8bf5-8b25-4761-9738-84bd49dce926", 00:19:24.963 "strip_size_kb": 64, 00:19:24.963 "state": "online", 00:19:24.963 "raid_level": "raid0", 00:19:24.963 "superblock": false, 00:19:24.963 "num_base_bdevs": 4, 00:19:24.963 
"num_base_bdevs_discovered": 4, 00:19:24.963 "num_base_bdevs_operational": 4, 00:19:24.963 "base_bdevs_list": [ 00:19:24.963 { 00:19:24.963 "name": "NewBaseBdev", 00:19:24.963 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:24.963 "is_configured": true, 00:19:24.963 "data_offset": 0, 00:19:24.963 "data_size": 65536 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "name": "BaseBdev2", 00:19:24.963 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:24.963 "is_configured": true, 00:19:24.963 "data_offset": 0, 00:19:24.963 "data_size": 65536 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "name": "BaseBdev3", 00:19:24.963 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:24.963 "is_configured": true, 00:19:24.963 "data_offset": 0, 00:19:24.963 "data_size": 65536 00:19:24.963 }, 00:19:24.963 { 00:19:24.963 "name": "BaseBdev4", 00:19:24.963 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:24.963 "is_configured": true, 00:19:24.963 "data_offset": 0, 00:19:24.963 "data_size": 65536 00:19:24.963 } 00:19:24.963 ] 00:19:24.963 } 00:19:24.963 } 00:19:24.963 }' 00:19:24.963 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.963 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:24.963 BaseBdev2 00:19:24.963 BaseBdev3 00:19:24.963 BaseBdev4' 00:19:24.963 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.963 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:24.963 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.222 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.222 "name": "NewBaseBdev", 
00:19:25.222 "aliases": [ 00:19:25.222 "596232c2-082d-4d5f-8057-2383e7fec73c" 00:19:25.222 ], 00:19:25.222 "product_name": "Malloc disk", 00:19:25.222 "block_size": 512, 00:19:25.222 "num_blocks": 65536, 00:19:25.222 "uuid": "596232c2-082d-4d5f-8057-2383e7fec73c", 00:19:25.222 "assigned_rate_limits": { 00:19:25.222 "rw_ios_per_sec": 0, 00:19:25.222 "rw_mbytes_per_sec": 0, 00:19:25.222 "r_mbytes_per_sec": 0, 00:19:25.222 "w_mbytes_per_sec": 0 00:19:25.222 }, 00:19:25.222 "claimed": true, 00:19:25.222 "claim_type": "exclusive_write", 00:19:25.222 "zoned": false, 00:19:25.222 "supported_io_types": { 00:19:25.222 "read": true, 00:19:25.222 "write": true, 00:19:25.222 "unmap": true, 00:19:25.222 "write_zeroes": true, 00:19:25.222 "flush": true, 00:19:25.222 "reset": true, 00:19:25.222 "compare": false, 00:19:25.222 "compare_and_write": false, 00:19:25.222 "abort": true, 00:19:25.222 "nvme_admin": false, 00:19:25.222 "nvme_io": false 00:19:25.222 }, 00:19:25.222 "memory_domains": [ 00:19:25.222 { 00:19:25.222 "dma_device_id": "system", 00:19:25.222 "dma_device_type": 1 00:19:25.222 }, 00:19:25.222 { 00:19:25.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.222 "dma_device_type": 2 00:19:25.222 } 00:19:25.222 ], 00:19:25.222 "driver_specific": {} 00:19:25.222 }' 00:19:25.222 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.222 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.222 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.222 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.222 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.223 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.223 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:19:25.482 19:03:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:25.482 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.741 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.741 "name": "BaseBdev2", 00:19:25.741 "aliases": [ 00:19:25.741 "e4df5224-8cb6-4c9b-b7c6-96e2d33af812" 00:19:25.741 ], 00:19:25.741 "product_name": "Malloc disk", 00:19:25.741 "block_size": 512, 00:19:25.741 "num_blocks": 65536, 00:19:25.741 "uuid": "e4df5224-8cb6-4c9b-b7c6-96e2d33af812", 00:19:25.741 "assigned_rate_limits": { 00:19:25.741 "rw_ios_per_sec": 0, 00:19:25.741 "rw_mbytes_per_sec": 0, 00:19:25.741 "r_mbytes_per_sec": 0, 00:19:25.741 "w_mbytes_per_sec": 0 00:19:25.741 }, 00:19:25.741 "claimed": true, 00:19:25.741 "claim_type": "exclusive_write", 00:19:25.741 "zoned": false, 00:19:25.741 "supported_io_types": { 00:19:25.741 "read": true, 00:19:25.741 "write": true, 00:19:25.741 "unmap": true, 00:19:25.741 "write_zeroes": true, 00:19:25.741 "flush": true, 00:19:25.741 "reset": true, 00:19:25.741 "compare": false, 00:19:25.741 "compare_and_write": false, 00:19:25.741 "abort": true, 00:19:25.741 "nvme_admin": false, 00:19:25.741 
"nvme_io": false 00:19:25.741 }, 00:19:25.741 "memory_domains": [ 00:19:25.741 { 00:19:25.741 "dma_device_id": "system", 00:19:25.741 "dma_device_type": 1 00:19:25.741 }, 00:19:25.741 { 00:19:25.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.741 "dma_device_type": 2 00:19:25.741 } 00:19:25.741 ], 00:19:25.741 "driver_specific": {} 00:19:25.741 }' 00:19:25.741 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.741 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.741 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.741 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.741 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:26.000 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
jq '.[]' 00:19:26.259 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.259 "name": "BaseBdev3", 00:19:26.259 "aliases": [ 00:19:26.259 "71a7a7df-7ce3-497c-85e1-9c12028c0d68" 00:19:26.259 ], 00:19:26.259 "product_name": "Malloc disk", 00:19:26.259 "block_size": 512, 00:19:26.259 "num_blocks": 65536, 00:19:26.259 "uuid": "71a7a7df-7ce3-497c-85e1-9c12028c0d68", 00:19:26.259 "assigned_rate_limits": { 00:19:26.259 "rw_ios_per_sec": 0, 00:19:26.259 "rw_mbytes_per_sec": 0, 00:19:26.259 "r_mbytes_per_sec": 0, 00:19:26.259 "w_mbytes_per_sec": 0 00:19:26.259 }, 00:19:26.259 "claimed": true, 00:19:26.259 "claim_type": "exclusive_write", 00:19:26.259 "zoned": false, 00:19:26.259 "supported_io_types": { 00:19:26.259 "read": true, 00:19:26.259 "write": true, 00:19:26.259 "unmap": true, 00:19:26.259 "write_zeroes": true, 00:19:26.259 "flush": true, 00:19:26.259 "reset": true, 00:19:26.259 "compare": false, 00:19:26.259 "compare_and_write": false, 00:19:26.259 "abort": true, 00:19:26.259 "nvme_admin": false, 00:19:26.259 "nvme_io": false 00:19:26.259 }, 00:19:26.259 "memory_domains": [ 00:19:26.259 { 00:19:26.259 "dma_device_id": "system", 00:19:26.259 "dma_device_type": 1 00:19:26.259 }, 00:19:26.259 { 00:19:26.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.259 "dma_device_type": 2 00:19:26.259 } 00:19:26.259 ], 00:19:26.259 "driver_specific": {} 00:19:26.259 }' 00:19:26.259 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.259 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.259 19:03:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.259 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.518 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.777 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:26.777 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.777 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.777 "name": "BaseBdev4", 00:19:26.777 "aliases": [ 00:19:26.777 "d3ed02dd-24a9-4858-8be4-c89a17b05873" 00:19:26.777 ], 00:19:26.777 "product_name": "Malloc disk", 00:19:26.777 "block_size": 512, 00:19:26.777 "num_blocks": 65536, 00:19:26.777 "uuid": "d3ed02dd-24a9-4858-8be4-c89a17b05873", 00:19:26.777 "assigned_rate_limits": { 00:19:26.777 "rw_ios_per_sec": 0, 00:19:26.777 "rw_mbytes_per_sec": 0, 00:19:26.777 "r_mbytes_per_sec": 0, 00:19:26.777 "w_mbytes_per_sec": 0 00:19:26.777 }, 00:19:26.777 "claimed": true, 00:19:26.777 "claim_type": "exclusive_write", 00:19:26.777 "zoned": false, 00:19:26.777 "supported_io_types": { 00:19:26.777 "read": true, 00:19:26.777 "write": true, 00:19:26.777 "unmap": true, 00:19:26.777 "write_zeroes": true, 00:19:26.777 "flush": true, 00:19:26.777 "reset": 
true, 00:19:26.777 "compare": false, 00:19:26.777 "compare_and_write": false, 00:19:26.777 "abort": true, 00:19:26.777 "nvme_admin": false, 00:19:26.777 "nvme_io": false 00:19:26.777 }, 00:19:26.777 "memory_domains": [ 00:19:26.777 { 00:19:26.777 "dma_device_id": "system", 00:19:26.777 "dma_device_type": 1 00:19:26.777 }, 00:19:26.777 { 00:19:26.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.777 "dma_device_type": 2 00:19:26.777 } 00:19:26.777 ], 00:19:26.777 "driver_specific": {} 00:19:26.777 }' 00:19:26.777 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.777 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.036 19:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:27.294 [2024-06-10 19:03:42.003947] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.294 [2024-06-10 19:03:42.003972] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.294 [2024-06-10 19:03:42.004016] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.294 [2024-06-10 19:03:42.004069] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.294 [2024-06-10 19:03:42.004080] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb05d70 name Existed_Raid, state offline 00:19:27.294 19:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1691222 00:19:27.294 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1691222 ']' 00:19:27.294 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1691222 00:19:27.294 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:19:27.294 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:27.294 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1691222 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1691222' 00:19:27.597 killing process with pid 1691222 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1691222 00:19:27.597 [2024-06-10 19:03:42.093963] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:27.597 19:03:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1691222 00:19:27.597 [2024-06-10 19:03:42.125894] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.597 00:19:27.597 real 0m29.900s 00:19:27.597 user 0m54.876s 00:19:27.597 sys 0m5.329s 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:27.597 19:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.597 ************************************ 00:19:27.597 END TEST raid_state_function_test 00:19:27.597 ************************************ 00:19:27.857 19:03:42 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:19:27.857 19:03:42 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:19:27.857 19:03:42 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:27.857 19:03:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.857 ************************************ 00:19:27.857 START TEST raid_state_function_test_sb 00:19:27.857 ************************************ 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 4 true 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:27.857 19:03:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:27.857 
19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1696992 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1696992' 00:19:27.857 Process raid pid: 1696992 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1696992 /var/tmp/spdk-raid.sock 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1696992 ']' 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:27.857 19:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.857 [2024-06-10 19:03:42.465468] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:19:27.857 [2024-06-10 19:03:42.465522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.0 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.1 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.2 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.3 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.4 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.5 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.6 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:01.7 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.0 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.1 cannot be 
used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.2 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.3 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.4 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.5 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.6 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b6:02.7 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.0 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.1 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.2 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.3 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.4 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.5 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.6 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:01.7 cannot be used 00:19:27.857 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.0 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.1 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.2 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.3 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.4 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.5 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.6 cannot be used 00:19:27.857 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.857 EAL: Requested device 0000:b8:02.7 cannot be used 00:19:27.857 [2024-06-10 19:03:42.600091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.116 [2024-06-10 19:03:42.687852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.116 [2024-06-10 19:03:42.746000] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.116 [2024-06-10 19:03:42.746035] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.747 19:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:28.747 19:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:19:28.747 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:29.006 [2024-06-10 19:03:43.577480] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.006 [2024-06-10 19:03:43.577519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.006 [2024-06-10 19:03:43.577529] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.006 [2024-06-10 19:03:43.577539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.006 [2024-06-10 19:03:43.577547] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.006 [2024-06-10 19:03:43.577557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.006 [2024-06-10 19:03:43.577565] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.006 [2024-06-10 19:03:43.577585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.006 19:03:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.006 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.265 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.265 "name": "Existed_Raid", 00:19:29.265 "uuid": "538ea024-2dd2-45f4-a6d9-c2c011c76f29", 00:19:29.265 "strip_size_kb": 64, 00:19:29.265 "state": "configuring", 00:19:29.265 "raid_level": "raid0", 00:19:29.265 "superblock": true, 00:19:29.265 "num_base_bdevs": 4, 00:19:29.265 "num_base_bdevs_discovered": 0, 00:19:29.265 "num_base_bdevs_operational": 4, 00:19:29.265 "base_bdevs_list": [ 00:19:29.265 { 00:19:29.265 "name": "BaseBdev1", 00:19:29.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.265 "is_configured": false, 00:19:29.265 "data_offset": 0, 00:19:29.265 "data_size": 0 00:19:29.265 }, 00:19:29.265 { 00:19:29.265 "name": "BaseBdev2", 00:19:29.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.265 "is_configured": false, 00:19:29.265 "data_offset": 0, 00:19:29.265 "data_size": 0 00:19:29.265 }, 00:19:29.265 { 00:19:29.265 "name": "BaseBdev3", 00:19:29.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.265 "is_configured": false, 00:19:29.265 "data_offset": 0, 00:19:29.265 "data_size": 0 00:19:29.265 }, 00:19:29.265 { 00:19:29.265 "name": "BaseBdev4", 00:19:29.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.265 "is_configured": false, 00:19:29.265 "data_offset": 0, 
00:19:29.265 "data_size": 0 00:19:29.265 } 00:19:29.265 ] 00:19:29.265 }' 00:19:29.265 19:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.265 19:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.832 19:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:29.832 [2024-06-10 19:03:44.583969] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.832 [2024-06-10 19:03:44.583994] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x95ef50 name Existed_Raid, state configuring 00:19:30.092 19:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:30.092 [2024-06-10 19:03:44.812597] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.092 [2024-06-10 19:03:44.812620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.092 [2024-06-10 19:03:44.812628] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.092 [2024-06-10 19:03:44.812639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.092 [2024-06-10 19:03:44.812646] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.092 [2024-06-10 19:03:44.812657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.092 [2024-06-10 19:03:44.812664] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:30.092 [2024-06-10 19:03:44.812674] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:30.092 19:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.352 [2024-06-10 19:03:45.050581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.352 BaseBdev1 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:30.352 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.611 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.870 [ 00:19:30.870 { 00:19:30.870 "name": "BaseBdev1", 00:19:30.870 "aliases": [ 00:19:30.870 "56aaa3a7-b02f-4df9-94c6-98f36ce62694" 00:19:30.870 ], 00:19:30.870 "product_name": "Malloc disk", 00:19:30.870 "block_size": 512, 00:19:30.870 "num_blocks": 65536, 00:19:30.870 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:30.870 "assigned_rate_limits": { 00:19:30.870 "rw_ios_per_sec": 0, 00:19:30.870 
"rw_mbytes_per_sec": 0, 00:19:30.870 "r_mbytes_per_sec": 0, 00:19:30.870 "w_mbytes_per_sec": 0 00:19:30.870 }, 00:19:30.870 "claimed": true, 00:19:30.870 "claim_type": "exclusive_write", 00:19:30.870 "zoned": false, 00:19:30.870 "supported_io_types": { 00:19:30.870 "read": true, 00:19:30.870 "write": true, 00:19:30.870 "unmap": true, 00:19:30.870 "write_zeroes": true, 00:19:30.870 "flush": true, 00:19:30.870 "reset": true, 00:19:30.870 "compare": false, 00:19:30.870 "compare_and_write": false, 00:19:30.870 "abort": true, 00:19:30.870 "nvme_admin": false, 00:19:30.870 "nvme_io": false 00:19:30.870 }, 00:19:30.870 "memory_domains": [ 00:19:30.870 { 00:19:30.870 "dma_device_id": "system", 00:19:30.870 "dma_device_type": 1 00:19:30.870 }, 00:19:30.870 { 00:19:30.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.870 "dma_device_type": 2 00:19:30.870 } 00:19:30.870 ], 00:19:30.870 "driver_specific": {} 00:19:30.870 } 00:19:30.870 ] 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.870 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.130 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.130 "name": "Existed_Raid", 00:19:31.130 "uuid": "e922e966-57be-480b-b52b-ac701118ed5f", 00:19:31.130 "strip_size_kb": 64, 00:19:31.130 "state": "configuring", 00:19:31.130 "raid_level": "raid0", 00:19:31.130 "superblock": true, 00:19:31.130 "num_base_bdevs": 4, 00:19:31.130 "num_base_bdevs_discovered": 1, 00:19:31.130 "num_base_bdevs_operational": 4, 00:19:31.130 "base_bdevs_list": [ 00:19:31.130 { 00:19:31.130 "name": "BaseBdev1", 00:19:31.130 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:31.130 "is_configured": true, 00:19:31.130 "data_offset": 2048, 00:19:31.130 "data_size": 63488 00:19:31.130 }, 00:19:31.130 { 00:19:31.130 "name": "BaseBdev2", 00:19:31.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.130 "is_configured": false, 00:19:31.130 "data_offset": 0, 00:19:31.130 "data_size": 0 00:19:31.130 }, 00:19:31.130 { 00:19:31.130 "name": "BaseBdev3", 00:19:31.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.130 "is_configured": false, 00:19:31.130 "data_offset": 0, 00:19:31.130 "data_size": 0 00:19:31.130 }, 00:19:31.130 { 00:19:31.130 "name": "BaseBdev4", 00:19:31.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.130 "is_configured": false, 00:19:31.130 "data_offset": 0, 00:19:31.130 "data_size": 0 00:19:31.130 } 00:19:31.130 ] 
00:19:31.130 }' 00:19:31.130 19:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.130 19:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.699 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:31.958 [2024-06-10 19:03:46.534494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.958 [2024-06-10 19:03:46.534524] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x95e7c0 name Existed_Raid, state configuring 00:19:31.958 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:32.218 [2024-06-10 19:03:46.763129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.218 [2024-06-10 19:03:46.764460] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.218 [2024-06-10 19:03:46.764489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.218 [2024-06-10 19:03:46.764499] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.218 [2024-06-10 19:03:46.764509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:32.218 [2024-06-10 19:03:46.764522] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:32.218 [2024-06-10 19:03:46.764532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.218 19:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.477 19:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.477 "name": "Existed_Raid", 00:19:32.477 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:32.477 "strip_size_kb": 64, 00:19:32.477 "state": "configuring", 00:19:32.477 "raid_level": "raid0", 00:19:32.477 "superblock": true, 
00:19:32.477 "num_base_bdevs": 4, 00:19:32.477 "num_base_bdevs_discovered": 1, 00:19:32.477 "num_base_bdevs_operational": 4, 00:19:32.477 "base_bdevs_list": [ 00:19:32.477 { 00:19:32.477 "name": "BaseBdev1", 00:19:32.477 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:32.477 "is_configured": true, 00:19:32.477 "data_offset": 2048, 00:19:32.477 "data_size": 63488 00:19:32.477 }, 00:19:32.477 { 00:19:32.477 "name": "BaseBdev2", 00:19:32.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.477 "is_configured": false, 00:19:32.477 "data_offset": 0, 00:19:32.477 "data_size": 0 00:19:32.477 }, 00:19:32.477 { 00:19:32.477 "name": "BaseBdev3", 00:19:32.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.477 "is_configured": false, 00:19:32.477 "data_offset": 0, 00:19:32.477 "data_size": 0 00:19:32.477 }, 00:19:32.477 { 00:19:32.477 "name": "BaseBdev4", 00:19:32.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.478 "is_configured": false, 00:19:32.478 "data_offset": 0, 00:19:32.478 "data_size": 0 00:19:32.478 } 00:19:32.478 ] 00:19:32.478 }' 00:19:32.478 19:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.478 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.046 19:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.306 [2024-06-10 19:03:47.812974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.306 BaseBdev2 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_timeout= 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:33.306 19:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.306 19:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.565 [ 00:19:33.565 { 00:19:33.565 "name": "BaseBdev2", 00:19:33.565 "aliases": [ 00:19:33.565 "6118c76b-a62a-4fd3-a7ee-cf3fb585f915" 00:19:33.565 ], 00:19:33.565 "product_name": "Malloc disk", 00:19:33.565 "block_size": 512, 00:19:33.565 "num_blocks": 65536, 00:19:33.565 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:33.565 "assigned_rate_limits": { 00:19:33.565 "rw_ios_per_sec": 0, 00:19:33.565 "rw_mbytes_per_sec": 0, 00:19:33.565 "r_mbytes_per_sec": 0, 00:19:33.565 "w_mbytes_per_sec": 0 00:19:33.565 }, 00:19:33.565 "claimed": true, 00:19:33.565 "claim_type": "exclusive_write", 00:19:33.565 "zoned": false, 00:19:33.565 "supported_io_types": { 00:19:33.565 "read": true, 00:19:33.565 "write": true, 00:19:33.565 "unmap": true, 00:19:33.565 "write_zeroes": true, 00:19:33.565 "flush": true, 00:19:33.565 "reset": true, 00:19:33.565 "compare": false, 00:19:33.565 "compare_and_write": false, 00:19:33.565 "abort": true, 00:19:33.565 "nvme_admin": false, 00:19:33.565 "nvme_io": false 00:19:33.565 }, 00:19:33.565 "memory_domains": [ 00:19:33.565 { 00:19:33.565 "dma_device_id": "system", 00:19:33.565 "dma_device_type": 1 00:19:33.565 }, 00:19:33.565 { 00:19:33.565 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:33.565 "dma_device_type": 2 00:19:33.565 } 00:19:33.565 ], 00:19:33.565 "driver_specific": {} 00:19:33.565 } 00:19:33.565 ] 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.565 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:33.825 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.825 "name": "Existed_Raid", 00:19:33.825 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:33.825 "strip_size_kb": 64, 00:19:33.825 "state": "configuring", 00:19:33.825 "raid_level": "raid0", 00:19:33.825 "superblock": true, 00:19:33.825 "num_base_bdevs": 4, 00:19:33.825 "num_base_bdevs_discovered": 2, 00:19:33.825 "num_base_bdevs_operational": 4, 00:19:33.825 "base_bdevs_list": [ 00:19:33.825 { 00:19:33.825 "name": "BaseBdev1", 00:19:33.825 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:33.825 "is_configured": true, 00:19:33.825 "data_offset": 2048, 00:19:33.825 "data_size": 63488 00:19:33.825 }, 00:19:33.825 { 00:19:33.825 "name": "BaseBdev2", 00:19:33.825 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:33.825 "is_configured": true, 00:19:33.825 "data_offset": 2048, 00:19:33.825 "data_size": 63488 00:19:33.825 }, 00:19:33.825 { 00:19:33.825 "name": "BaseBdev3", 00:19:33.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.825 "is_configured": false, 00:19:33.825 "data_offset": 0, 00:19:33.825 "data_size": 0 00:19:33.825 }, 00:19:33.825 { 00:19:33.825 "name": "BaseBdev4", 00:19:33.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.825 "is_configured": false, 00:19:33.825 "data_offset": 0, 00:19:33.825 "data_size": 0 00:19:33.825 } 00:19:33.825 ] 00:19:33.825 }' 00:19:33.825 19:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.825 19:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:34.652 [2024-06-10 19:03:49.308209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:19:34.652 BaseBdev3 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:34.652 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.912 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:35.171 [ 00:19:35.171 { 00:19:35.171 "name": "BaseBdev3", 00:19:35.171 "aliases": [ 00:19:35.171 "9c1d0baf-1985-478f-99db-5eaf7fae5463" 00:19:35.171 ], 00:19:35.171 "product_name": "Malloc disk", 00:19:35.171 "block_size": 512, 00:19:35.171 "num_blocks": 65536, 00:19:35.171 "uuid": "9c1d0baf-1985-478f-99db-5eaf7fae5463", 00:19:35.171 "assigned_rate_limits": { 00:19:35.171 "rw_ios_per_sec": 0, 00:19:35.171 "rw_mbytes_per_sec": 0, 00:19:35.171 "r_mbytes_per_sec": 0, 00:19:35.171 "w_mbytes_per_sec": 0 00:19:35.171 }, 00:19:35.171 "claimed": true, 00:19:35.171 "claim_type": "exclusive_write", 00:19:35.171 "zoned": false, 00:19:35.171 "supported_io_types": { 00:19:35.171 "read": true, 00:19:35.171 "write": true, 00:19:35.171 "unmap": true, 00:19:35.171 "write_zeroes": true, 00:19:35.171 "flush": true, 00:19:35.171 "reset": true, 
00:19:35.171 "compare": false, 00:19:35.171 "compare_and_write": false, 00:19:35.171 "abort": true, 00:19:35.171 "nvme_admin": false, 00:19:35.171 "nvme_io": false 00:19:35.171 }, 00:19:35.171 "memory_domains": [ 00:19:35.171 { 00:19:35.171 "dma_device_id": "system", 00:19:35.171 "dma_device_type": 1 00:19:35.171 }, 00:19:35.171 { 00:19:35.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.171 "dma_device_type": 2 00:19:35.171 } 00:19:35.171 ], 00:19:35.171 "driver_specific": {} 00:19:35.171 } 00:19:35.171 ] 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.171 19:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.431 19:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.431 "name": "Existed_Raid", 00:19:35.431 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:35.431 "strip_size_kb": 64, 00:19:35.431 "state": "configuring", 00:19:35.431 "raid_level": "raid0", 00:19:35.431 "superblock": true, 00:19:35.431 "num_base_bdevs": 4, 00:19:35.431 "num_base_bdevs_discovered": 3, 00:19:35.431 "num_base_bdevs_operational": 4, 00:19:35.431 "base_bdevs_list": [ 00:19:35.431 { 00:19:35.431 "name": "BaseBdev1", 00:19:35.431 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:35.431 "is_configured": true, 00:19:35.431 "data_offset": 2048, 00:19:35.431 "data_size": 63488 00:19:35.431 }, 00:19:35.431 { 00:19:35.431 "name": "BaseBdev2", 00:19:35.431 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:35.431 "is_configured": true, 00:19:35.431 "data_offset": 2048, 00:19:35.431 "data_size": 63488 00:19:35.431 }, 00:19:35.431 { 00:19:35.431 "name": "BaseBdev3", 00:19:35.431 "uuid": "9c1d0baf-1985-478f-99db-5eaf7fae5463", 00:19:35.431 "is_configured": true, 00:19:35.431 "data_offset": 2048, 00:19:35.431 "data_size": 63488 00:19:35.431 }, 00:19:35.431 { 00:19:35.431 "name": "BaseBdev4", 00:19:35.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.431 "is_configured": false, 00:19:35.431 "data_offset": 0, 00:19:35.431 "data_size": 0 00:19:35.431 } 00:19:35.431 ] 00:19:35.431 }' 00:19:35.431 19:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.431 19:03:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.999 19:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:36.258 [2024-06-10 19:03:50.791407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:36.258 [2024-06-10 19:03:50.791555] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x95f820 00:19:36.258 [2024-06-10 19:03:50.791568] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:36.258 [2024-06-10 19:03:50.791734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x960470 00:19:36.258 [2024-06-10 19:03:50.791841] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x95f820 00:19:36.258 [2024-06-10 19:03:50.791850] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x95f820 00:19:36.258 [2024-06-10 19:03:50.791932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.258 BaseBdev4 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:36.258 19:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:36.518 [ 00:19:36.518 { 00:19:36.518 "name": "BaseBdev4", 00:19:36.518 "aliases": [ 00:19:36.518 "199f3799-96c6-40be-b02b-539c29503eb5" 00:19:36.518 ], 00:19:36.518 "product_name": "Malloc disk", 00:19:36.518 "block_size": 512, 00:19:36.518 "num_blocks": 65536, 00:19:36.518 "uuid": "199f3799-96c6-40be-b02b-539c29503eb5", 00:19:36.518 "assigned_rate_limits": { 00:19:36.518 "rw_ios_per_sec": 0, 00:19:36.518 "rw_mbytes_per_sec": 0, 00:19:36.518 "r_mbytes_per_sec": 0, 00:19:36.518 "w_mbytes_per_sec": 0 00:19:36.518 }, 00:19:36.518 "claimed": true, 00:19:36.518 "claim_type": "exclusive_write", 00:19:36.518 "zoned": false, 00:19:36.518 "supported_io_types": { 00:19:36.518 "read": true, 00:19:36.518 "write": true, 00:19:36.518 "unmap": true, 00:19:36.518 "write_zeroes": true, 00:19:36.518 "flush": true, 00:19:36.518 "reset": true, 00:19:36.518 "compare": false, 00:19:36.518 "compare_and_write": false, 00:19:36.518 "abort": true, 00:19:36.518 "nvme_admin": false, 00:19:36.518 "nvme_io": false 00:19:36.518 }, 00:19:36.518 "memory_domains": [ 00:19:36.518 { 00:19:36.518 "dma_device_id": "system", 00:19:36.518 "dma_device_type": 1 00:19:36.518 }, 00:19:36.518 { 00:19:36.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.518 "dma_device_type": 2 00:19:36.518 } 00:19:36.518 ], 00:19:36.518 "driver_specific": {} 00:19:36.518 } 00:19:36.518 ] 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.518 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.777 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.777 "name": "Existed_Raid", 00:19:36.777 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:36.777 "strip_size_kb": 64, 00:19:36.777 "state": "online", 00:19:36.777 "raid_level": "raid0", 00:19:36.777 "superblock": true, 00:19:36.777 "num_base_bdevs": 4, 00:19:36.777 "num_base_bdevs_discovered": 4, 00:19:36.777 "num_base_bdevs_operational": 
4, 00:19:36.777 "base_bdevs_list": [ 00:19:36.777 { 00:19:36.777 "name": "BaseBdev1", 00:19:36.777 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:36.777 "is_configured": true, 00:19:36.777 "data_offset": 2048, 00:19:36.777 "data_size": 63488 00:19:36.777 }, 00:19:36.777 { 00:19:36.777 "name": "BaseBdev2", 00:19:36.777 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:36.777 "is_configured": true, 00:19:36.777 "data_offset": 2048, 00:19:36.777 "data_size": 63488 00:19:36.777 }, 00:19:36.777 { 00:19:36.777 "name": "BaseBdev3", 00:19:36.777 "uuid": "9c1d0baf-1985-478f-99db-5eaf7fae5463", 00:19:36.777 "is_configured": true, 00:19:36.777 "data_offset": 2048, 00:19:36.777 "data_size": 63488 00:19:36.777 }, 00:19:36.777 { 00:19:36.777 "name": "BaseBdev4", 00:19:36.777 "uuid": "199f3799-96c6-40be-b02b-539c29503eb5", 00:19:36.777 "is_configured": true, 00:19:36.777 "data_offset": 2048, 00:19:36.777 "data_size": 63488 00:19:36.777 } 00:19:36.777 ] 00:19:36.777 }' 00:19:36.777 19:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.777 19:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:37.346 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:37.605 [2024-06-10 19:03:52.279616] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.605 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:37.605 "name": "Existed_Raid", 00:19:37.605 "aliases": [ 00:19:37.605 "a62f2f12-be10-4619-a56d-af71f6366d80" 00:19:37.605 ], 00:19:37.605 "product_name": "Raid Volume", 00:19:37.605 "block_size": 512, 00:19:37.605 "num_blocks": 253952, 00:19:37.605 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:37.605 "assigned_rate_limits": { 00:19:37.605 "rw_ios_per_sec": 0, 00:19:37.605 "rw_mbytes_per_sec": 0, 00:19:37.605 "r_mbytes_per_sec": 0, 00:19:37.605 "w_mbytes_per_sec": 0 00:19:37.605 }, 00:19:37.605 "claimed": false, 00:19:37.605 "zoned": false, 00:19:37.605 "supported_io_types": { 00:19:37.605 "read": true, 00:19:37.605 "write": true, 00:19:37.605 "unmap": true, 00:19:37.605 "write_zeroes": true, 00:19:37.605 "flush": true, 00:19:37.605 "reset": true, 00:19:37.605 "compare": false, 00:19:37.605 "compare_and_write": false, 00:19:37.605 "abort": false, 00:19:37.605 "nvme_admin": false, 00:19:37.605 "nvme_io": false 00:19:37.605 }, 00:19:37.605 "memory_domains": [ 00:19:37.605 { 00:19:37.605 "dma_device_id": "system", 00:19:37.605 "dma_device_type": 1 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.605 "dma_device_type": 2 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "dma_device_id": "system", 00:19:37.605 "dma_device_type": 1 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.605 "dma_device_type": 2 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "dma_device_id": "system", 00:19:37.605 "dma_device_type": 1 00:19:37.605 }, 
00:19:37.605 { 00:19:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.605 "dma_device_type": 2 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "dma_device_id": "system", 00:19:37.605 "dma_device_type": 1 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.605 "dma_device_type": 2 00:19:37.605 } 00:19:37.605 ], 00:19:37.605 "driver_specific": { 00:19:37.605 "raid": { 00:19:37.605 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:37.605 "strip_size_kb": 64, 00:19:37.605 "state": "online", 00:19:37.605 "raid_level": "raid0", 00:19:37.605 "superblock": true, 00:19:37.605 "num_base_bdevs": 4, 00:19:37.605 "num_base_bdevs_discovered": 4, 00:19:37.605 "num_base_bdevs_operational": 4, 00:19:37.605 "base_bdevs_list": [ 00:19:37.605 { 00:19:37.605 "name": "BaseBdev1", 00:19:37.605 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:37.605 "is_configured": true, 00:19:37.605 "data_offset": 2048, 00:19:37.605 "data_size": 63488 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "name": "BaseBdev2", 00:19:37.605 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:37.605 "is_configured": true, 00:19:37.605 "data_offset": 2048, 00:19:37.605 "data_size": 63488 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "name": "BaseBdev3", 00:19:37.605 "uuid": "9c1d0baf-1985-478f-99db-5eaf7fae5463", 00:19:37.605 "is_configured": true, 00:19:37.605 "data_offset": 2048, 00:19:37.605 "data_size": 63488 00:19:37.605 }, 00:19:37.605 { 00:19:37.605 "name": "BaseBdev4", 00:19:37.605 "uuid": "199f3799-96c6-40be-b02b-539c29503eb5", 00:19:37.605 "is_configured": true, 00:19:37.605 "data_offset": 2048, 00:19:37.605 "data_size": 63488 00:19:37.605 } 00:19:37.605 ] 00:19:37.605 } 00:19:37.605 } 00:19:37.605 }' 00:19:37.605 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.605 19:03:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:37.605 BaseBdev2 00:19:37.605 BaseBdev3 00:19:37.605 BaseBdev4' 00:19:37.605 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.605 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:37.605 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.864 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.864 "name": "BaseBdev1", 00:19:37.864 "aliases": [ 00:19:37.864 "56aaa3a7-b02f-4df9-94c6-98f36ce62694" 00:19:37.864 ], 00:19:37.864 "product_name": "Malloc disk", 00:19:37.864 "block_size": 512, 00:19:37.864 "num_blocks": 65536, 00:19:37.864 "uuid": "56aaa3a7-b02f-4df9-94c6-98f36ce62694", 00:19:37.864 "assigned_rate_limits": { 00:19:37.864 "rw_ios_per_sec": 0, 00:19:37.864 "rw_mbytes_per_sec": 0, 00:19:37.864 "r_mbytes_per_sec": 0, 00:19:37.864 "w_mbytes_per_sec": 0 00:19:37.864 }, 00:19:37.864 "claimed": true, 00:19:37.864 "claim_type": "exclusive_write", 00:19:37.864 "zoned": false, 00:19:37.864 "supported_io_types": { 00:19:37.864 "read": true, 00:19:37.864 "write": true, 00:19:37.864 "unmap": true, 00:19:37.864 "write_zeroes": true, 00:19:37.864 "flush": true, 00:19:37.864 "reset": true, 00:19:37.864 "compare": false, 00:19:37.864 "compare_and_write": false, 00:19:37.864 "abort": true, 00:19:37.864 "nvme_admin": false, 00:19:37.864 "nvme_io": false 00:19:37.864 }, 00:19:37.864 "memory_domains": [ 00:19:37.864 { 00:19:37.864 "dma_device_id": "system", 00:19:37.864 "dma_device_type": 1 00:19:37.864 }, 00:19:37.864 { 00:19:37.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.864 "dma_device_type": 2 00:19:37.864 } 00:19:37.864 ], 00:19:37.864 "driver_specific": {} 00:19:37.864 }' 00:19:37.864 19:03:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.864 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.123 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.382 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:38.382 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:38.382 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:38.382 19:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:38.642 "name": "BaseBdev2", 00:19:38.642 "aliases": [ 00:19:38.642 "6118c76b-a62a-4fd3-a7ee-cf3fb585f915" 00:19:38.642 ], 00:19:38.642 "product_name": "Malloc disk", 00:19:38.642 "block_size": 512, 00:19:38.642 
"num_blocks": 65536, 00:19:38.642 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:38.642 "assigned_rate_limits": { 00:19:38.642 "rw_ios_per_sec": 0, 00:19:38.642 "rw_mbytes_per_sec": 0, 00:19:38.642 "r_mbytes_per_sec": 0, 00:19:38.642 "w_mbytes_per_sec": 0 00:19:38.642 }, 00:19:38.642 "claimed": true, 00:19:38.642 "claim_type": "exclusive_write", 00:19:38.642 "zoned": false, 00:19:38.642 "supported_io_types": { 00:19:38.642 "read": true, 00:19:38.642 "write": true, 00:19:38.642 "unmap": true, 00:19:38.642 "write_zeroes": true, 00:19:38.642 "flush": true, 00:19:38.642 "reset": true, 00:19:38.642 "compare": false, 00:19:38.642 "compare_and_write": false, 00:19:38.642 "abort": true, 00:19:38.642 "nvme_admin": false, 00:19:38.642 "nvme_io": false 00:19:38.642 }, 00:19:38.642 "memory_domains": [ 00:19:38.642 { 00:19:38.642 "dma_device_id": "system", 00:19:38.642 "dma_device_type": 1 00:19:38.642 }, 00:19:38.642 { 00:19:38.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.642 "dma_device_type": 2 00:19:38.642 } 00:19:38.642 ], 00:19:38.642 "driver_specific": {} 00:19:38.642 }' 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.642 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.902 19:03:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.902 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.902 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.902 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:38.902 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:38.902 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:38.902 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.161 "name": "BaseBdev3", 00:19:39.161 "aliases": [ 00:19:39.161 "9c1d0baf-1985-478f-99db-5eaf7fae5463" 00:19:39.161 ], 00:19:39.161 "product_name": "Malloc disk", 00:19:39.161 "block_size": 512, 00:19:39.161 "num_blocks": 65536, 00:19:39.161 "uuid": "9c1d0baf-1985-478f-99db-5eaf7fae5463", 00:19:39.161 "assigned_rate_limits": { 00:19:39.161 "rw_ios_per_sec": 0, 00:19:39.161 "rw_mbytes_per_sec": 0, 00:19:39.161 "r_mbytes_per_sec": 0, 00:19:39.161 "w_mbytes_per_sec": 0 00:19:39.161 }, 00:19:39.161 "claimed": true, 00:19:39.161 "claim_type": "exclusive_write", 00:19:39.161 "zoned": false, 00:19:39.161 "supported_io_types": { 00:19:39.161 "read": true, 00:19:39.161 "write": true, 00:19:39.161 "unmap": true, 00:19:39.161 "write_zeroes": true, 00:19:39.161 "flush": true, 00:19:39.161 "reset": true, 00:19:39.161 "compare": false, 00:19:39.161 "compare_and_write": false, 00:19:39.161 "abort": true, 00:19:39.161 "nvme_admin": false, 00:19:39.161 "nvme_io": false 00:19:39.161 }, 00:19:39.161 "memory_domains": [ 00:19:39.161 { 00:19:39.161 
"dma_device_id": "system", 00:19:39.161 "dma_device_type": 1 00:19:39.161 }, 00:19:39.161 { 00:19:39.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.161 "dma_device_type": 2 00:19:39.161 } 00:19:39.161 ], 00:19:39.161 "driver_specific": {} 00:19:39.161 }' 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.161 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.421 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.421 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.421 19:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.421 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.421 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.421 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.421 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.421 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:39.681 19:03:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.681 "name": "BaseBdev4", 00:19:39.681 "aliases": [ 00:19:39.681 "199f3799-96c6-40be-b02b-539c29503eb5" 00:19:39.681 ], 00:19:39.681 "product_name": "Malloc disk", 00:19:39.681 "block_size": 512, 00:19:39.681 "num_blocks": 65536, 00:19:39.681 "uuid": "199f3799-96c6-40be-b02b-539c29503eb5", 00:19:39.681 "assigned_rate_limits": { 00:19:39.681 "rw_ios_per_sec": 0, 00:19:39.681 "rw_mbytes_per_sec": 0, 00:19:39.681 "r_mbytes_per_sec": 0, 00:19:39.681 "w_mbytes_per_sec": 0 00:19:39.681 }, 00:19:39.681 "claimed": true, 00:19:39.681 "claim_type": "exclusive_write", 00:19:39.681 "zoned": false, 00:19:39.681 "supported_io_types": { 00:19:39.681 "read": true, 00:19:39.681 "write": true, 00:19:39.681 "unmap": true, 00:19:39.681 "write_zeroes": true, 00:19:39.681 "flush": true, 00:19:39.681 "reset": true, 00:19:39.681 "compare": false, 00:19:39.681 "compare_and_write": false, 00:19:39.681 "abort": true, 00:19:39.681 "nvme_admin": false, 00:19:39.681 "nvme_io": false 00:19:39.681 }, 00:19:39.681 "memory_domains": [ 00:19:39.681 { 00:19:39.681 "dma_device_id": "system", 00:19:39.681 "dma_device_type": 1 00:19:39.681 }, 00:19:39.681 { 00:19:39.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.681 "dma_device_type": 2 00:19:39.681 } 00:19:39.681 ], 00:19:39.681 "driver_specific": {} 00:19:39.681 }' 00:19:39.681 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.681 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.681 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:39.681 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.681 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.940 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:40.200 [2024-06-10 19:03:54.846139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:40.200 [2024-06-10 19:03:54.846161] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.200 [2024-06-10 19:03:54.846203] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.200 19:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.460 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.460 "name": "Existed_Raid", 00:19:40.460 "uuid": "a62f2f12-be10-4619-a56d-af71f6366d80", 00:19:40.460 "strip_size_kb": 64, 00:19:40.460 "state": "offline", 00:19:40.460 "raid_level": "raid0", 00:19:40.460 "superblock": true, 00:19:40.460 "num_base_bdevs": 4, 00:19:40.460 "num_base_bdevs_discovered": 3, 00:19:40.460 "num_base_bdevs_operational": 3, 00:19:40.460 "base_bdevs_list": [ 00:19:40.460 { 00:19:40.460 "name": null, 00:19:40.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.460 "is_configured": false, 00:19:40.460 "data_offset": 2048, 00:19:40.460 
"data_size": 63488 00:19:40.460 }, 00:19:40.460 { 00:19:40.460 "name": "BaseBdev2", 00:19:40.460 "uuid": "6118c76b-a62a-4fd3-a7ee-cf3fb585f915", 00:19:40.460 "is_configured": true, 00:19:40.460 "data_offset": 2048, 00:19:40.460 "data_size": 63488 00:19:40.460 }, 00:19:40.460 { 00:19:40.460 "name": "BaseBdev3", 00:19:40.460 "uuid": "9c1d0baf-1985-478f-99db-5eaf7fae5463", 00:19:40.460 "is_configured": true, 00:19:40.460 "data_offset": 2048, 00:19:40.460 "data_size": 63488 00:19:40.460 }, 00:19:40.460 { 00:19:40.460 "name": "BaseBdev4", 00:19:40.460 "uuid": "199f3799-96c6-40be-b02b-539c29503eb5", 00:19:40.460 "is_configured": true, 00:19:40.460 "data_offset": 2048, 00:19:40.460 "data_size": 63488 00:19:40.460 } 00:19:40.460 ] 00:19:40.460 }' 00:19:40.460 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.460 19:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.028 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:41.029 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:41.029 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.029 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:41.288 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:41.288 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.288 19:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:41.548 [2024-06-10 19:03:56.094516] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:41.548 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:41.548 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:41.548 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.548 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:41.807 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:41.807 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.807 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:41.807 [2024-06-10 19:03:56.537596] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:42.066 19:03:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:42.325 [2024-06-10 19:03:56.988761] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:42.325 [2024-06-10 19:03:56.988799] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x95f820 name Existed_Raid, state offline 00:19:42.326 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:42.326 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:42.326 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.326 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:42.585 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:42.585 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:42.585 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:42.585 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:42.585 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:42.585 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:42.844 BaseBdev2 00:19:42.844 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:42.844 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:19:42.844 19:03:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:42.844 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:42.844 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:42.844 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:42.844 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.104 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:43.364 [ 00:19:43.364 { 00:19:43.364 "name": "BaseBdev2", 00:19:43.364 "aliases": [ 00:19:43.364 "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0" 00:19:43.364 ], 00:19:43.364 "product_name": "Malloc disk", 00:19:43.364 "block_size": 512, 00:19:43.364 "num_blocks": 65536, 00:19:43.364 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:43.364 "assigned_rate_limits": { 00:19:43.364 "rw_ios_per_sec": 0, 00:19:43.364 "rw_mbytes_per_sec": 0, 00:19:43.364 "r_mbytes_per_sec": 0, 00:19:43.364 "w_mbytes_per_sec": 0 00:19:43.364 }, 00:19:43.364 "claimed": false, 00:19:43.364 "zoned": false, 00:19:43.364 "supported_io_types": { 00:19:43.364 "read": true, 00:19:43.364 "write": true, 00:19:43.364 "unmap": true, 00:19:43.364 "write_zeroes": true, 00:19:43.364 "flush": true, 00:19:43.364 "reset": true, 00:19:43.364 "compare": false, 00:19:43.364 "compare_and_write": false, 00:19:43.364 "abort": true, 00:19:43.364 "nvme_admin": false, 00:19:43.364 "nvme_io": false 00:19:43.364 }, 00:19:43.364 "memory_domains": [ 00:19:43.364 { 00:19:43.364 "dma_device_id": "system", 00:19:43.364 "dma_device_type": 1 00:19:43.364 }, 00:19:43.364 { 
00:19:43.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.364 "dma_device_type": 2 00:19:43.364 } 00:19:43.364 ], 00:19:43.364 "driver_specific": {} 00:19:43.364 } 00:19:43.364 ] 00:19:43.364 19:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:43.364 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:43.364 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:43.364 19:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:43.623 BaseBdev3 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.623 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:43.882 [ 00:19:43.882 { 00:19:43.882 "name": "BaseBdev3", 00:19:43.882 "aliases": [ 00:19:43.882 
"87aa3fcd-8594-4a76-94be-46614f7da5ed" 00:19:43.882 ], 00:19:43.882 "product_name": "Malloc disk", 00:19:43.882 "block_size": 512, 00:19:43.882 "num_blocks": 65536, 00:19:43.882 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:43.882 "assigned_rate_limits": { 00:19:43.882 "rw_ios_per_sec": 0, 00:19:43.882 "rw_mbytes_per_sec": 0, 00:19:43.882 "r_mbytes_per_sec": 0, 00:19:43.882 "w_mbytes_per_sec": 0 00:19:43.882 }, 00:19:43.882 "claimed": false, 00:19:43.882 "zoned": false, 00:19:43.883 "supported_io_types": { 00:19:43.883 "read": true, 00:19:43.883 "write": true, 00:19:43.883 "unmap": true, 00:19:43.883 "write_zeroes": true, 00:19:43.883 "flush": true, 00:19:43.883 "reset": true, 00:19:43.883 "compare": false, 00:19:43.883 "compare_and_write": false, 00:19:43.883 "abort": true, 00:19:43.883 "nvme_admin": false, 00:19:43.883 "nvme_io": false 00:19:43.883 }, 00:19:43.883 "memory_domains": [ 00:19:43.883 { 00:19:43.883 "dma_device_id": "system", 00:19:43.883 "dma_device_type": 1 00:19:43.883 }, 00:19:43.883 { 00:19:43.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.883 "dma_device_type": 2 00:19:43.883 } 00:19:43.883 ], 00:19:43.883 "driver_specific": {} 00:19:43.883 } 00:19:43.883 ] 00:19:43.883 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:43.883 19:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:43.883 19:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:43.883 19:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:44.143 BaseBdev4 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_name=BaseBdev4 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:44.143 19:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:44.402 19:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:44.662 [ 00:19:44.662 { 00:19:44.662 "name": "BaseBdev4", 00:19:44.662 "aliases": [ 00:19:44.662 "b0987e05-ba9c-4718-ae32-08bc782aed11" 00:19:44.662 ], 00:19:44.662 "product_name": "Malloc disk", 00:19:44.662 "block_size": 512, 00:19:44.662 "num_blocks": 65536, 00:19:44.662 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:44.662 "assigned_rate_limits": { 00:19:44.662 "rw_ios_per_sec": 0, 00:19:44.662 "rw_mbytes_per_sec": 0, 00:19:44.662 "r_mbytes_per_sec": 0, 00:19:44.662 "w_mbytes_per_sec": 0 00:19:44.662 }, 00:19:44.662 "claimed": false, 00:19:44.662 "zoned": false, 00:19:44.662 "supported_io_types": { 00:19:44.662 "read": true, 00:19:44.662 "write": true, 00:19:44.662 "unmap": true, 00:19:44.662 "write_zeroes": true, 00:19:44.662 "flush": true, 00:19:44.662 "reset": true, 00:19:44.662 "compare": false, 00:19:44.662 "compare_and_write": false, 00:19:44.662 "abort": true, 00:19:44.662 "nvme_admin": false, 00:19:44.662 "nvme_io": false 00:19:44.662 }, 00:19:44.662 "memory_domains": [ 00:19:44.662 { 00:19:44.662 "dma_device_id": "system", 00:19:44.662 
"dma_device_type": 1 00:19:44.662 }, 00:19:44.662 { 00:19:44.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.662 "dma_device_type": 2 00:19:44.662 } 00:19:44.662 ], 00:19:44.662 "driver_specific": {} 00:19:44.662 } 00:19:44.662 ] 00:19:44.662 19:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:44.662 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:44.662 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:44.662 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:44.922 [2024-06-10 19:03:59.458262] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:44.922 [2024-06-10 19:03:59.458300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:44.922 [2024-06-10 19:03:59.458318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.922 [2024-06-10 19:03:59.459526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.922 [2024-06-10 19:03:59.459565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:44.922 
19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.922 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.181 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.181 "name": "Existed_Raid", 00:19:45.182 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:45.182 "strip_size_kb": 64, 00:19:45.182 "state": "configuring", 00:19:45.182 "raid_level": "raid0", 00:19:45.182 "superblock": true, 00:19:45.182 "num_base_bdevs": 4, 00:19:45.182 "num_base_bdevs_discovered": 3, 00:19:45.182 "num_base_bdevs_operational": 4, 00:19:45.182 "base_bdevs_list": [ 00:19:45.182 { 00:19:45.182 "name": "BaseBdev1", 00:19:45.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.182 "is_configured": false, 00:19:45.182 "data_offset": 0, 00:19:45.182 "data_size": 0 00:19:45.182 }, 00:19:45.182 { 00:19:45.182 "name": "BaseBdev2", 00:19:45.182 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:45.182 "is_configured": true, 00:19:45.182 "data_offset": 2048, 00:19:45.182 "data_size": 63488 00:19:45.182 }, 00:19:45.182 { 00:19:45.182 "name": 
"BaseBdev3", 00:19:45.182 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:45.182 "is_configured": true, 00:19:45.182 "data_offset": 2048, 00:19:45.182 "data_size": 63488 00:19:45.182 }, 00:19:45.182 { 00:19:45.182 "name": "BaseBdev4", 00:19:45.182 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:45.182 "is_configured": true, 00:19:45.182 "data_offset": 2048, 00:19:45.182 "data_size": 63488 00:19:45.182 } 00:19:45.182 ] 00:19:45.182 }' 00:19:45.182 19:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.182 19:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.750 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:45.750 [2024-06-10 19:04:00.488945] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:45.750 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:45.751 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.751 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.751 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:45.751 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.751 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.010 19:04:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.010 "name": "Existed_Raid", 00:19:46.010 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:46.010 "strip_size_kb": 64, 00:19:46.010 "state": "configuring", 00:19:46.010 "raid_level": "raid0", 00:19:46.010 "superblock": true, 00:19:46.010 "num_base_bdevs": 4, 00:19:46.010 "num_base_bdevs_discovered": 2, 00:19:46.010 "num_base_bdevs_operational": 4, 00:19:46.010 "base_bdevs_list": [ 00:19:46.010 { 00:19:46.010 "name": "BaseBdev1", 00:19:46.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.010 "is_configured": false, 00:19:46.010 "data_offset": 0, 00:19:46.010 "data_size": 0 00:19:46.010 }, 00:19:46.010 { 00:19:46.010 "name": null, 00:19:46.010 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:46.010 "is_configured": false, 00:19:46.010 "data_offset": 2048, 00:19:46.010 "data_size": 63488 00:19:46.010 }, 00:19:46.010 { 00:19:46.010 "name": "BaseBdev3", 00:19:46.010 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:46.010 "is_configured": true, 00:19:46.010 "data_offset": 2048, 00:19:46.010 "data_size": 63488 00:19:46.010 }, 00:19:46.010 { 00:19:46.010 "name": "BaseBdev4", 00:19:46.010 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:46.010 "is_configured": true, 00:19:46.010 "data_offset": 2048, 00:19:46.010 "data_size": 63488 00:19:46.010 } 00:19:46.010 ] 00:19:46.010 }' 00:19:46.010 19:04:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.010 19:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.578 19:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.578 19:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:46.837 19:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:46.837 19:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.115 [2024-06-10 19:04:01.711410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.115 BaseBdev1 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:47.115 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.428 19:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.428 [ 00:19:47.428 { 00:19:47.428 "name": "BaseBdev1", 00:19:47.428 "aliases": [ 00:19:47.428 "227e27fb-26c8-4aa7-89fa-773d10c4ced3" 00:19:47.428 ], 00:19:47.428 "product_name": "Malloc disk", 00:19:47.428 "block_size": 512, 00:19:47.428 "num_blocks": 65536, 00:19:47.428 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:47.428 "assigned_rate_limits": { 00:19:47.428 "rw_ios_per_sec": 0, 00:19:47.428 "rw_mbytes_per_sec": 0, 00:19:47.428 "r_mbytes_per_sec": 0, 00:19:47.428 "w_mbytes_per_sec": 0 00:19:47.428 }, 00:19:47.428 "claimed": true, 00:19:47.428 "claim_type": "exclusive_write", 00:19:47.428 "zoned": false, 00:19:47.428 "supported_io_types": { 00:19:47.428 "read": true, 00:19:47.428 "write": true, 00:19:47.428 "unmap": true, 00:19:47.428 "write_zeroes": true, 00:19:47.428 "flush": true, 00:19:47.428 "reset": true, 00:19:47.428 "compare": false, 00:19:47.428 "compare_and_write": false, 00:19:47.428 "abort": true, 00:19:47.428 "nvme_admin": false, 00:19:47.428 "nvme_io": false 00:19:47.428 }, 00:19:47.428 "memory_domains": [ 00:19:47.428 { 00:19:47.428 "dma_device_id": "system", 00:19:47.428 "dma_device_type": 1 00:19:47.428 }, 00:19:47.428 { 00:19:47.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.428 "dma_device_type": 2 00:19:47.428 } 00:19:47.428 ], 00:19:47.428 "driver_specific": {} 00:19:47.428 } 00:19:47.428 ] 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.428 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.693 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.693 "name": "Existed_Raid", 00:19:47.693 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:47.693 "strip_size_kb": 64, 00:19:47.693 "state": "configuring", 00:19:47.693 "raid_level": "raid0", 00:19:47.693 "superblock": true, 00:19:47.693 "num_base_bdevs": 4, 00:19:47.693 "num_base_bdevs_discovered": 3, 00:19:47.693 "num_base_bdevs_operational": 4, 00:19:47.693 "base_bdevs_list": [ 00:19:47.693 { 00:19:47.693 "name": "BaseBdev1", 00:19:47.693 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:47.693 "is_configured": true, 00:19:47.693 "data_offset": 2048, 00:19:47.693 "data_size": 63488 00:19:47.693 }, 00:19:47.693 { 00:19:47.693 "name": null, 00:19:47.693 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:47.693 "is_configured": 
false, 00:19:47.693 "data_offset": 2048, 00:19:47.693 "data_size": 63488 00:19:47.693 }, 00:19:47.693 { 00:19:47.693 "name": "BaseBdev3", 00:19:47.693 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:47.693 "is_configured": true, 00:19:47.693 "data_offset": 2048, 00:19:47.693 "data_size": 63488 00:19:47.693 }, 00:19:47.693 { 00:19:47.693 "name": "BaseBdev4", 00:19:47.693 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:47.693 "is_configured": true, 00:19:47.693 "data_offset": 2048, 00:19:47.693 "data_size": 63488 00:19:47.693 } 00:19:47.693 ] 00:19:47.693 }' 00:19:47.693 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.693 19:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.261 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.261 19:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:48.520 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:48.520 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:48.780 [2024-06-10 19:04:03.399890] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.780 19:04:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.780 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.039 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.039 "name": "Existed_Raid", 00:19:49.039 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:49.039 "strip_size_kb": 64, 00:19:49.039 "state": "configuring", 00:19:49.039 "raid_level": "raid0", 00:19:49.039 "superblock": true, 00:19:49.039 "num_base_bdevs": 4, 00:19:49.039 "num_base_bdevs_discovered": 2, 00:19:49.039 "num_base_bdevs_operational": 4, 00:19:49.039 "base_bdevs_list": [ 00:19:49.039 { 00:19:49.039 "name": "BaseBdev1", 00:19:49.039 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:49.039 "is_configured": true, 00:19:49.039 "data_offset": 2048, 00:19:49.039 "data_size": 63488 00:19:49.039 }, 00:19:49.039 { 00:19:49.039 "name": null, 00:19:49.039 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:49.039 "is_configured": false, 00:19:49.039 
"data_offset": 2048, 00:19:49.039 "data_size": 63488 00:19:49.039 }, 00:19:49.039 { 00:19:49.039 "name": null, 00:19:49.039 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:49.039 "is_configured": false, 00:19:49.039 "data_offset": 2048, 00:19:49.039 "data_size": 63488 00:19:49.039 }, 00:19:49.039 { 00:19:49.039 "name": "BaseBdev4", 00:19:49.039 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:49.039 "is_configured": true, 00:19:49.039 "data_offset": 2048, 00:19:49.039 "data_size": 63488 00:19:49.039 } 00:19:49.039 ] 00:19:49.039 }' 00:19:49.039 19:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.039 19:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.606 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.606 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.864 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:49.864 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:50.123 [2024-06-10 19:04:04.663253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.123 19:04:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.123 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.124 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.124 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.382 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.382 "name": "Existed_Raid", 00:19:50.382 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:50.382 "strip_size_kb": 64, 00:19:50.382 "state": "configuring", 00:19:50.382 "raid_level": "raid0", 00:19:50.382 "superblock": true, 00:19:50.382 "num_base_bdevs": 4, 00:19:50.382 "num_base_bdevs_discovered": 3, 00:19:50.382 "num_base_bdevs_operational": 4, 00:19:50.382 "base_bdevs_list": [ 00:19:50.382 { 00:19:50.382 "name": "BaseBdev1", 00:19:50.382 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:50.382 "is_configured": true, 00:19:50.382 "data_offset": 2048, 00:19:50.382 "data_size": 63488 00:19:50.382 }, 00:19:50.382 { 00:19:50.382 "name": null, 00:19:50.382 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:50.382 "is_configured": false, 00:19:50.382 
"data_offset": 2048, 00:19:50.382 "data_size": 63488 00:19:50.382 }, 00:19:50.382 { 00:19:50.382 "name": "BaseBdev3", 00:19:50.382 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:50.382 "is_configured": true, 00:19:50.382 "data_offset": 2048, 00:19:50.382 "data_size": 63488 00:19:50.382 }, 00:19:50.382 { 00:19:50.382 "name": "BaseBdev4", 00:19:50.382 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:50.382 "is_configured": true, 00:19:50.382 "data_offset": 2048, 00:19:50.383 "data_size": 63488 00:19:50.383 } 00:19:50.383 ] 00:19:50.383 }' 00:19:50.383 19:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.383 19:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.950 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.950 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:50.950 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:50.950 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:51.210 [2024-06-10 19:04:05.858574] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.210 19:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.468 19:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.468 "name": "Existed_Raid", 00:19:51.468 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:51.468 "strip_size_kb": 64, 00:19:51.468 "state": "configuring", 00:19:51.468 "raid_level": "raid0", 00:19:51.468 "superblock": true, 00:19:51.468 "num_base_bdevs": 4, 00:19:51.468 "num_base_bdevs_discovered": 2, 00:19:51.468 "num_base_bdevs_operational": 4, 00:19:51.468 "base_bdevs_list": [ 00:19:51.468 { 00:19:51.468 "name": null, 00:19:51.468 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:51.468 "is_configured": false, 00:19:51.468 "data_offset": 2048, 00:19:51.468 "data_size": 63488 00:19:51.468 }, 00:19:51.468 { 00:19:51.468 "name": null, 00:19:51.468 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:51.468 "is_configured": false, 00:19:51.468 "data_offset": 2048, 00:19:51.468 "data_size": 
63488 00:19:51.468 }, 00:19:51.469 { 00:19:51.469 "name": "BaseBdev3", 00:19:51.469 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:51.469 "is_configured": true, 00:19:51.469 "data_offset": 2048, 00:19:51.469 "data_size": 63488 00:19:51.469 }, 00:19:51.469 { 00:19:51.469 "name": "BaseBdev4", 00:19:51.469 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:51.469 "is_configured": true, 00:19:51.469 "data_offset": 2048, 00:19:51.469 "data_size": 63488 00:19:51.469 } 00:19:51.469 ] 00:19:51.469 }' 00:19:51.469 19:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.469 19:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.036 19:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.036 19:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:52.296 19:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:52.296 19:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:52.555 [2024-06-10 19:04:07.091720] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.555 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.815 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.815 "name": "Existed_Raid", 00:19:52.815 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:52.815 "strip_size_kb": 64, 00:19:52.815 "state": "configuring", 00:19:52.815 "raid_level": "raid0", 00:19:52.815 "superblock": true, 00:19:52.815 "num_base_bdevs": 4, 00:19:52.815 "num_base_bdevs_discovered": 3, 00:19:52.815 "num_base_bdevs_operational": 4, 00:19:52.815 "base_bdevs_list": [ 00:19:52.815 { 00:19:52.815 "name": null, 00:19:52.815 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:52.815 "is_configured": false, 00:19:52.815 "data_offset": 2048, 00:19:52.815 "data_size": 63488 00:19:52.815 }, 00:19:52.815 { 00:19:52.815 "name": "BaseBdev2", 00:19:52.815 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:52.815 "is_configured": true, 00:19:52.815 "data_offset": 2048, 00:19:52.815 
"data_size": 63488 00:19:52.815 }, 00:19:52.815 { 00:19:52.815 "name": "BaseBdev3", 00:19:52.815 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:52.815 "is_configured": true, 00:19:52.815 "data_offset": 2048, 00:19:52.815 "data_size": 63488 00:19:52.815 }, 00:19:52.815 { 00:19:52.815 "name": "BaseBdev4", 00:19:52.815 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:52.815 "is_configured": true, 00:19:52.815 "data_offset": 2048, 00:19:52.815 "data_size": 63488 00:19:52.815 } 00:19:52.815 ] 00:19:52.815 }' 00:19:52.815 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.815 19:04:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.383 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.383 19:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:53.383 19:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:53.643 19:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.643 19:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:53.643 19:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 227e27fb-26c8-4aa7-89fa-773d10c4ced3 00:19:53.901 [2024-06-10 19:04:08.566687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:53.901 [2024-06-10 19:04:08.566823] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x9618b0 00:19:53.901 [2024-06-10 19:04:08.566835] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:53.901 [2024-06-10 19:04:08.566995] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x958090 00:19:53.901 [2024-06-10 19:04:08.567107] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9618b0 00:19:53.901 [2024-06-10 19:04:08.567117] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9618b0 00:19:53.901 [2024-06-10 19:04:08.567199] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.901 NewBaseBdev 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:53.901 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.160 19:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:54.420 [ 00:19:54.420 { 00:19:54.420 "name": "NewBaseBdev", 00:19:54.420 "aliases": [ 00:19:54.420 "227e27fb-26c8-4aa7-89fa-773d10c4ced3" 00:19:54.420 ], 00:19:54.420 "product_name": 
"Malloc disk", 00:19:54.420 "block_size": 512, 00:19:54.420 "num_blocks": 65536, 00:19:54.420 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:54.420 "assigned_rate_limits": { 00:19:54.420 "rw_ios_per_sec": 0, 00:19:54.420 "rw_mbytes_per_sec": 0, 00:19:54.420 "r_mbytes_per_sec": 0, 00:19:54.420 "w_mbytes_per_sec": 0 00:19:54.420 }, 00:19:54.420 "claimed": true, 00:19:54.420 "claim_type": "exclusive_write", 00:19:54.420 "zoned": false, 00:19:54.420 "supported_io_types": { 00:19:54.420 "read": true, 00:19:54.420 "write": true, 00:19:54.420 "unmap": true, 00:19:54.420 "write_zeroes": true, 00:19:54.420 "flush": true, 00:19:54.420 "reset": true, 00:19:54.420 "compare": false, 00:19:54.420 "compare_and_write": false, 00:19:54.420 "abort": true, 00:19:54.420 "nvme_admin": false, 00:19:54.420 "nvme_io": false 00:19:54.420 }, 00:19:54.420 "memory_domains": [ 00:19:54.420 { 00:19:54.420 "dma_device_id": "system", 00:19:54.420 "dma_device_type": 1 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.420 "dma_device_type": 2 00:19:54.420 } 00:19:54.420 ], 00:19:54.420 "driver_specific": {} 00:19:54.420 } 00:19:54.420 ] 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.420 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.680 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.680 "name": "Existed_Raid", 00:19:54.680 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:54.680 "strip_size_kb": 64, 00:19:54.680 "state": "online", 00:19:54.680 "raid_level": "raid0", 00:19:54.680 "superblock": true, 00:19:54.680 "num_base_bdevs": 4, 00:19:54.680 "num_base_bdevs_discovered": 4, 00:19:54.680 "num_base_bdevs_operational": 4, 00:19:54.680 "base_bdevs_list": [ 00:19:54.680 { 00:19:54.680 "name": "NewBaseBdev", 00:19:54.680 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:54.680 "is_configured": true, 00:19:54.680 "data_offset": 2048, 00:19:54.680 "data_size": 63488 00:19:54.680 }, 00:19:54.680 { 00:19:54.680 "name": "BaseBdev2", 00:19:54.680 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:54.680 "is_configured": true, 00:19:54.680 "data_offset": 2048, 00:19:54.680 "data_size": 63488 00:19:54.680 }, 00:19:54.680 { 00:19:54.680 "name": "BaseBdev3", 00:19:54.680 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:54.680 "is_configured": true, 00:19:54.680 "data_offset": 2048, 00:19:54.680 "data_size": 63488 00:19:54.680 }, 
00:19:54.680 { 00:19:54.680 "name": "BaseBdev4", 00:19:54.680 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:54.680 "is_configured": true, 00:19:54.680 "data_offset": 2048, 00:19:54.680 "data_size": 63488 00:19:54.680 } 00:19:54.680 ] 00:19:54.680 }' 00:19:54.680 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.680 19:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:55.248 19:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:55.248 [2024-06-10 19:04:09.998783] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.506 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:55.506 "name": "Existed_Raid", 00:19:55.506 "aliases": [ 00:19:55.506 "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93" 00:19:55.506 ], 00:19:55.506 "product_name": "Raid Volume", 00:19:55.506 "block_size": 512, 00:19:55.506 "num_blocks": 253952, 00:19:55.506 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 
00:19:55.506 "assigned_rate_limits": { 00:19:55.506 "rw_ios_per_sec": 0, 00:19:55.506 "rw_mbytes_per_sec": 0, 00:19:55.506 "r_mbytes_per_sec": 0, 00:19:55.506 "w_mbytes_per_sec": 0 00:19:55.506 }, 00:19:55.506 "claimed": false, 00:19:55.506 "zoned": false, 00:19:55.506 "supported_io_types": { 00:19:55.506 "read": true, 00:19:55.506 "write": true, 00:19:55.506 "unmap": true, 00:19:55.506 "write_zeroes": true, 00:19:55.506 "flush": true, 00:19:55.506 "reset": true, 00:19:55.506 "compare": false, 00:19:55.506 "compare_and_write": false, 00:19:55.506 "abort": false, 00:19:55.506 "nvme_admin": false, 00:19:55.506 "nvme_io": false 00:19:55.506 }, 00:19:55.506 "memory_domains": [ 00:19:55.506 { 00:19:55.506 "dma_device_id": "system", 00:19:55.506 "dma_device_type": 1 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.506 "dma_device_type": 2 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "system", 00:19:55.506 "dma_device_type": 1 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.506 "dma_device_type": 2 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "system", 00:19:55.506 "dma_device_type": 1 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.506 "dma_device_type": 2 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "system", 00:19:55.506 "dma_device_type": 1 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.506 "dma_device_type": 2 00:19:55.506 } 00:19:55.506 ], 00:19:55.506 "driver_specific": { 00:19:55.506 "raid": { 00:19:55.506 "uuid": "f48594ec-c4e7-4bc1-8e68-d5f1fa5b0c93", 00:19:55.506 "strip_size_kb": 64, 00:19:55.506 "state": "online", 00:19:55.506 "raid_level": "raid0", 00:19:55.506 "superblock": true, 00:19:55.506 "num_base_bdevs": 4, 00:19:55.506 "num_base_bdevs_discovered": 4, 00:19:55.506 "num_base_bdevs_operational": 4, 00:19:55.506 
"base_bdevs_list": [ 00:19:55.506 { 00:19:55.506 "name": "NewBaseBdev", 00:19:55.506 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:55.506 "is_configured": true, 00:19:55.506 "data_offset": 2048, 00:19:55.506 "data_size": 63488 00:19:55.506 }, 00:19:55.506 { 00:19:55.506 "name": "BaseBdev2", 00:19:55.506 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:55.506 "is_configured": true, 00:19:55.506 "data_offset": 2048, 00:19:55.506 "data_size": 63488 00:19:55.506 }, 00:19:55.506 { 00:19:55.507 "name": "BaseBdev3", 00:19:55.507 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:55.507 "is_configured": true, 00:19:55.507 "data_offset": 2048, 00:19:55.507 "data_size": 63488 00:19:55.507 }, 00:19:55.507 { 00:19:55.507 "name": "BaseBdev4", 00:19:55.507 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:55.507 "is_configured": true, 00:19:55.507 "data_offset": 2048, 00:19:55.507 "data_size": 63488 00:19:55.507 } 00:19:55.507 ] 00:19:55.507 } 00:19:55.507 } 00:19:55.507 }' 00:19:55.507 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.507 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:55.507 BaseBdev2 00:19:55.507 BaseBdev3 00:19:55.507 BaseBdev4' 00:19:55.507 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.507 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:55.507 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.765 "name": "NewBaseBdev", 00:19:55.765 "aliases": [ 00:19:55.765 
"227e27fb-26c8-4aa7-89fa-773d10c4ced3" 00:19:55.765 ], 00:19:55.765 "product_name": "Malloc disk", 00:19:55.765 "block_size": 512, 00:19:55.765 "num_blocks": 65536, 00:19:55.765 "uuid": "227e27fb-26c8-4aa7-89fa-773d10c4ced3", 00:19:55.765 "assigned_rate_limits": { 00:19:55.765 "rw_ios_per_sec": 0, 00:19:55.765 "rw_mbytes_per_sec": 0, 00:19:55.765 "r_mbytes_per_sec": 0, 00:19:55.765 "w_mbytes_per_sec": 0 00:19:55.765 }, 00:19:55.765 "claimed": true, 00:19:55.765 "claim_type": "exclusive_write", 00:19:55.765 "zoned": false, 00:19:55.765 "supported_io_types": { 00:19:55.765 "read": true, 00:19:55.765 "write": true, 00:19:55.765 "unmap": true, 00:19:55.765 "write_zeroes": true, 00:19:55.765 "flush": true, 00:19:55.765 "reset": true, 00:19:55.765 "compare": false, 00:19:55.765 "compare_and_write": false, 00:19:55.765 "abort": true, 00:19:55.765 "nvme_admin": false, 00:19:55.765 "nvme_io": false 00:19:55.765 }, 00:19:55.765 "memory_domains": [ 00:19:55.765 { 00:19:55.765 "dma_device_id": "system", 00:19:55.765 "dma_device_type": 1 00:19:55.765 }, 00:19:55.765 { 00:19:55.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.765 "dma_device_type": 2 00:19:55.765 } 00:19:55.765 ], 00:19:55.765 "driver_specific": {} 00:19:55.765 }' 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:55.765 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.765 
19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:56.024 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.284 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.284 "name": "BaseBdev2", 00:19:56.284 "aliases": [ 00:19:56.284 "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0" 00:19:56.284 ], 00:19:56.284 "product_name": "Malloc disk", 00:19:56.284 "block_size": 512, 00:19:56.284 "num_blocks": 65536, 00:19:56.284 "uuid": "855b39ed-2ba8-45cc-8c33-e6ad8665a5b0", 00:19:56.284 "assigned_rate_limits": { 00:19:56.284 "rw_ios_per_sec": 0, 00:19:56.284 "rw_mbytes_per_sec": 0, 00:19:56.284 "r_mbytes_per_sec": 0, 00:19:56.284 "w_mbytes_per_sec": 0 00:19:56.284 }, 00:19:56.284 "claimed": true, 00:19:56.284 "claim_type": "exclusive_write", 00:19:56.284 "zoned": false, 00:19:56.284 "supported_io_types": { 00:19:56.284 "read": true, 00:19:56.284 "write": true, 00:19:56.284 "unmap": true, 00:19:56.284 "write_zeroes": true, 00:19:56.284 "flush": true, 00:19:56.284 "reset": true, 00:19:56.284 "compare": false, 00:19:56.284 "compare_and_write": false, 00:19:56.284 "abort": true, 00:19:56.284 "nvme_admin": false, 
00:19:56.284 "nvme_io": false 00:19:56.284 }, 00:19:56.284 "memory_domains": [ 00:19:56.284 { 00:19:56.284 "dma_device_id": "system", 00:19:56.284 "dma_device_type": 1 00:19:56.284 }, 00:19:56.284 { 00:19:56.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.284 "dma_device_type": 2 00:19:56.284 } 00:19:56.284 ], 00:19:56.284 "driver_specific": {} 00:19:56.284 }' 00:19:56.284 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.284 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.284 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.284 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.284 19:04:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.543 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:56.543 19:04:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.802 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.802 "name": "BaseBdev3", 00:19:56.802 "aliases": [ 00:19:56.802 "87aa3fcd-8594-4a76-94be-46614f7da5ed" 00:19:56.802 ], 00:19:56.802 "product_name": "Malloc disk", 00:19:56.802 "block_size": 512, 00:19:56.802 "num_blocks": 65536, 00:19:56.802 "uuid": "87aa3fcd-8594-4a76-94be-46614f7da5ed", 00:19:56.802 "assigned_rate_limits": { 00:19:56.802 "rw_ios_per_sec": 0, 00:19:56.802 "rw_mbytes_per_sec": 0, 00:19:56.802 "r_mbytes_per_sec": 0, 00:19:56.802 "w_mbytes_per_sec": 0 00:19:56.802 }, 00:19:56.802 "claimed": true, 00:19:56.802 "claim_type": "exclusive_write", 00:19:56.802 "zoned": false, 00:19:56.802 "supported_io_types": { 00:19:56.802 "read": true, 00:19:56.802 "write": true, 00:19:56.802 "unmap": true, 00:19:56.802 "write_zeroes": true, 00:19:56.802 "flush": true, 00:19:56.802 "reset": true, 00:19:56.802 "compare": false, 00:19:56.802 "compare_and_write": false, 00:19:56.802 "abort": true, 00:19:56.802 "nvme_admin": false, 00:19:56.802 "nvme_io": false 00:19:56.802 }, 00:19:56.802 "memory_domains": [ 00:19:56.802 { 00:19:56.802 "dma_device_id": "system", 00:19:56.802 "dma_device_type": 1 00:19:56.802 }, 00:19:56.802 { 00:19:56.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.802 "dma_device_type": 2 00:19:56.802 } 00:19:56.802 ], 00:19:56.802 "driver_specific": {} 00:19:56.802 }' 00:19:56.802 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.802 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.802 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.802 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.061 19:04:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:57.321 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.321 "name": "BaseBdev4", 00:19:57.321 "aliases": [ 00:19:57.321 "b0987e05-ba9c-4718-ae32-08bc782aed11" 00:19:57.321 ], 00:19:57.321 "product_name": "Malloc disk", 00:19:57.321 "block_size": 512, 00:19:57.321 "num_blocks": 65536, 00:19:57.321 "uuid": "b0987e05-ba9c-4718-ae32-08bc782aed11", 00:19:57.321 "assigned_rate_limits": { 00:19:57.321 "rw_ios_per_sec": 0, 00:19:57.321 "rw_mbytes_per_sec": 0, 00:19:57.321 "r_mbytes_per_sec": 0, 00:19:57.321 "w_mbytes_per_sec": 0 00:19:57.321 }, 00:19:57.321 "claimed": true, 00:19:57.321 "claim_type": "exclusive_write", 00:19:57.321 "zoned": false, 00:19:57.321 "supported_io_types": { 00:19:57.321 "read": true, 00:19:57.321 
"write": true, 00:19:57.321 "unmap": true, 00:19:57.321 "write_zeroes": true, 00:19:57.321 "flush": true, 00:19:57.321 "reset": true, 00:19:57.321 "compare": false, 00:19:57.321 "compare_and_write": false, 00:19:57.321 "abort": true, 00:19:57.321 "nvme_admin": false, 00:19:57.321 "nvme_io": false 00:19:57.321 }, 00:19:57.321 "memory_domains": [ 00:19:57.321 { 00:19:57.321 "dma_device_id": "system", 00:19:57.321 "dma_device_type": 1 00:19:57.321 }, 00:19:57.321 { 00:19:57.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.321 "dma_device_type": 2 00:19:57.321 } 00:19:57.321 ], 00:19:57.321 "driver_specific": {} 00:19:57.321 }' 00:19:57.321 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.321 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.579 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.837 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.837 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:57.837 [2024-06-10 19:04:12.581379] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.837 [2024-06-10 19:04:12.581401] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.837 [2024-06-10 19:04:12.581442] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.837 [2024-06-10 19:04:12.581494] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.837 [2024-06-10 19:04:12.581505] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9618b0 name Existed_Raid, state offline 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1696992 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1696992 ']' 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1696992 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1696992 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1696992' 00:19:58.096 killing process with pid 1696992 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@968 -- # kill 1696992 00:19:58.096 [2024-06-10 19:04:12.656173] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.096 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1696992 00:19:58.096 [2024-06-10 19:04:12.687204] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.356 19:04:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:58.356 00:19:58.356 real 0m30.468s 00:19:58.356 user 0m55.802s 00:19:58.356 sys 0m5.652s 00:19:58.356 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:58.356 19:04:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.356 ************************************ 00:19:58.356 END TEST raid_state_function_test_sb 00:19:58.356 ************************************ 00:19:58.356 19:04:12 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:19:58.356 19:04:12 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:58.356 19:04:12 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:58.356 19:04:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.356 ************************************ 00:19:58.356 START TEST raid_superblock_test 00:19:58.356 ************************************ 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 4 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 
00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1702827 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1702827 /var/tmp/spdk-raid.sock 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1702827 ']' 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:58.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:58.356 19:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.356 [2024-06-10 19:04:13.018600] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:19:58.356 [2024-06-10 19:04:13.018656] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702827 ] 00:19:58.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.356 EAL: Requested device 0000:b6:01.0 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.1 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.2 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.3 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.4 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.5 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.6 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:01.7 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.0 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.1 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.2 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.3 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.4 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.5 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.6 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b6:02.7 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.0 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.1 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.2 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.3 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.4 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:19:58.357 EAL: Requested device 0000:b8:01.5 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.6 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:01.7 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.0 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.1 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.2 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.3 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.4 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.5 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.6 cannot be used 00:19:58.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:58.357 EAL: Requested device 0000:b8:02.7 cannot be used 00:19:58.616 [2024-06-10 19:04:13.151565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.616 [2024-06-10 19:04:13.238638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.616 [2024-06-10 19:04:13.299256] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.616 [2024-06-10 19:04:13.299299] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 
-- # (( i == 0 )) 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:59.184 19:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:59.442 malloc1 00:19:59.442 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:59.701 [2024-06-10 19:04:14.348645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:59.701 [2024-06-10 19:04:14.348688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.701 [2024-06-10 19:04:14.348706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c80b70 00:19:59.701 [2024-06-10 19:04:14.348717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.701 
[2024-06-10 19:04:14.350337] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.701 [2024-06-10 19:04:14.350365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:59.701 pt1 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:59.701 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:59.960 malloc2 00:19:59.960 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.219 [2024-06-10 19:04:14.794169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.219 [2024-06-10 19:04:14.794207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.219 [2024-06-10 19:04:14.794222] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c81f70 
00:20:00.219 [2024-06-10 19:04:14.794233] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.219 [2024-06-10 19:04:14.795612] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.219 [2024-06-10 19:04:14.795637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:00.219 pt2 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:00.219 19:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:00.478 malloc3 00:20:00.478 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:00.737 [2024-06-10 19:04:15.243553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:00.737 [2024-06-10 19:04:15.243601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.737 
[2024-06-10 19:04:15.243623] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e18940 00:20:00.737 [2024-06-10 19:04:15.243635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.737 [2024-06-10 19:04:15.245007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.737 [2024-06-10 19:04:15.245033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:00.737 pt3 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:00.737 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:00.737 malloc4 00:20:00.996 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:00.996 [2024-06-10 19:04:15.705113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:00.996 
[2024-06-10 19:04:15.705151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.996 [2024-06-10 19:04:15.705167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c78900 00:20:00.996 [2024-06-10 19:04:15.705178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.996 [2024-06-10 19:04:15.706423] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.996 [2024-06-10 19:04:15.706450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:00.996 pt4 00:20:00.996 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:00.996 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:00.996 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:01.254 [2024-06-10 19:04:15.945758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.254 [2024-06-10 19:04:15.946905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:01.254 [2024-06-10 19:04:15.946954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:01.254 [2024-06-10 19:04:15.946992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:01.254 [2024-06-10 19:04:15.947143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c7a800 00:20:01.254 [2024-06-10 19:04:15.947154] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:01.254 [2024-06-10 19:04:15.947326] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c78b90 00:20:01.254 [2024-06-10 19:04:15.947456] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x1c7a800 00:20:01.254 [2024-06-10 19:04:15.947465] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c7a800 00:20:01.254 [2024-06-10 19:04:15.947549] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.254 19:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.513 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.513 "name": "raid_bdev1", 00:20:01.513 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:01.513 "strip_size_kb": 64, 00:20:01.513 "state": "online", 00:20:01.513 
"raid_level": "raid0", 00:20:01.513 "superblock": true, 00:20:01.513 "num_base_bdevs": 4, 00:20:01.513 "num_base_bdevs_discovered": 4, 00:20:01.513 "num_base_bdevs_operational": 4, 00:20:01.513 "base_bdevs_list": [ 00:20:01.513 { 00:20:01.513 "name": "pt1", 00:20:01.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.513 "is_configured": true, 00:20:01.513 "data_offset": 2048, 00:20:01.513 "data_size": 63488 00:20:01.513 }, 00:20:01.513 { 00:20:01.513 "name": "pt2", 00:20:01.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.513 "is_configured": true, 00:20:01.513 "data_offset": 2048, 00:20:01.513 "data_size": 63488 00:20:01.513 }, 00:20:01.513 { 00:20:01.513 "name": "pt3", 00:20:01.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.513 "is_configured": true, 00:20:01.513 "data_offset": 2048, 00:20:01.513 "data_size": 63488 00:20:01.513 }, 00:20:01.513 { 00:20:01.513 "name": "pt4", 00:20:01.513 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:01.513 "is_configured": true, 00:20:01.513 "data_offset": 2048, 00:20:01.513 "data_size": 63488 00:20:01.513 } 00:20:01.513 ] 00:20:01.513 }' 00:20:01.513 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.513 19:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:02.081 19:04:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:02.081 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:02.340 [2024-06-10 19:04:16.976702] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.340 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:02.340 "name": "raid_bdev1", 00:20:02.340 "aliases": [ 00:20:02.340 "5b53af16-dc60-422f-bf1f-c79bf1f9acc7" 00:20:02.340 ], 00:20:02.340 "product_name": "Raid Volume", 00:20:02.340 "block_size": 512, 00:20:02.340 "num_blocks": 253952, 00:20:02.340 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:02.340 "assigned_rate_limits": { 00:20:02.340 "rw_ios_per_sec": 0, 00:20:02.340 "rw_mbytes_per_sec": 0, 00:20:02.340 "r_mbytes_per_sec": 0, 00:20:02.340 "w_mbytes_per_sec": 0 00:20:02.340 }, 00:20:02.340 "claimed": false, 00:20:02.340 "zoned": false, 00:20:02.340 "supported_io_types": { 00:20:02.340 "read": true, 00:20:02.340 "write": true, 00:20:02.340 "unmap": true, 00:20:02.340 "write_zeroes": true, 00:20:02.340 "flush": true, 00:20:02.340 "reset": true, 00:20:02.340 "compare": false, 00:20:02.340 "compare_and_write": false, 00:20:02.340 "abort": false, 00:20:02.340 "nvme_admin": false, 00:20:02.340 "nvme_io": false 00:20:02.340 }, 00:20:02.340 "memory_domains": [ 00:20:02.340 { 00:20:02.340 "dma_device_id": "system", 00:20:02.340 "dma_device_type": 1 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.340 "dma_device_type": 2 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "system", 00:20:02.340 "dma_device_type": 1 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.340 "dma_device_type": 2 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "system", 00:20:02.340 
"dma_device_type": 1 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.340 "dma_device_type": 2 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "system", 00:20:02.340 "dma_device_type": 1 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.340 "dma_device_type": 2 00:20:02.340 } 00:20:02.340 ], 00:20:02.340 "driver_specific": { 00:20:02.340 "raid": { 00:20:02.340 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:02.340 "strip_size_kb": 64, 00:20:02.340 "state": "online", 00:20:02.340 "raid_level": "raid0", 00:20:02.340 "superblock": true, 00:20:02.340 "num_base_bdevs": 4, 00:20:02.340 "num_base_bdevs_discovered": 4, 00:20:02.340 "num_base_bdevs_operational": 4, 00:20:02.340 "base_bdevs_list": [ 00:20:02.340 { 00:20:02.340 "name": "pt1", 00:20:02.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.340 "is_configured": true, 00:20:02.340 "data_offset": 2048, 00:20:02.340 "data_size": 63488 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "name": "pt2", 00:20:02.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.340 "is_configured": true, 00:20:02.340 "data_offset": 2048, 00:20:02.340 "data_size": 63488 00:20:02.340 }, 00:20:02.340 { 00:20:02.340 "name": "pt3", 00:20:02.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.341 "is_configured": true, 00:20:02.341 "data_offset": 2048, 00:20:02.341 "data_size": 63488 00:20:02.341 }, 00:20:02.341 { 00:20:02.341 "name": "pt4", 00:20:02.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:02.341 "is_configured": true, 00:20:02.341 "data_offset": 2048, 00:20:02.341 "data_size": 63488 00:20:02.341 } 00:20:02.341 ] 00:20:02.341 } 00:20:02.341 } 00:20:02.341 }' 00:20:02.341 19:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:02.341 19:04:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:02.341 pt2 00:20:02.341 pt3 00:20:02.341 pt4' 00:20:02.341 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.341 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:02.341 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.600 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.600 "name": "pt1", 00:20:02.600 "aliases": [ 00:20:02.600 "00000000-0000-0000-0000-000000000001" 00:20:02.600 ], 00:20:02.600 "product_name": "passthru", 00:20:02.600 "block_size": 512, 00:20:02.600 "num_blocks": 65536, 00:20:02.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.600 "assigned_rate_limits": { 00:20:02.600 "rw_ios_per_sec": 0, 00:20:02.600 "rw_mbytes_per_sec": 0, 00:20:02.600 "r_mbytes_per_sec": 0, 00:20:02.600 "w_mbytes_per_sec": 0 00:20:02.600 }, 00:20:02.600 "claimed": true, 00:20:02.600 "claim_type": "exclusive_write", 00:20:02.600 "zoned": false, 00:20:02.600 "supported_io_types": { 00:20:02.600 "read": true, 00:20:02.600 "write": true, 00:20:02.600 "unmap": true, 00:20:02.600 "write_zeroes": true, 00:20:02.600 "flush": true, 00:20:02.600 "reset": true, 00:20:02.600 "compare": false, 00:20:02.600 "compare_and_write": false, 00:20:02.600 "abort": true, 00:20:02.600 "nvme_admin": false, 00:20:02.600 "nvme_io": false 00:20:02.600 }, 00:20:02.600 "memory_domains": [ 00:20:02.600 { 00:20:02.600 "dma_device_id": "system", 00:20:02.600 "dma_device_type": 1 00:20:02.600 }, 00:20:02.600 { 00:20:02.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.600 "dma_device_type": 2 00:20:02.600 } 00:20:02.600 ], 00:20:02.600 "driver_specific": { 00:20:02.600 "passthru": { 00:20:02.600 "name": "pt1", 00:20:02.600 "base_bdev_name": "malloc1" 00:20:02.600 
} 00:20:02.600 } 00:20:02.600 }' 00:20:02.600 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.600 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.859 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.119 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.119 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.119 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:03.119 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.119 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.119 "name": "pt2", 00:20:03.119 "aliases": [ 00:20:03.119 "00000000-0000-0000-0000-000000000002" 00:20:03.119 ], 00:20:03.119 "product_name": "passthru", 00:20:03.119 "block_size": 512, 00:20:03.119 "num_blocks": 65536, 00:20:03.119 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:20:03.119 "assigned_rate_limits": { 00:20:03.119 "rw_ios_per_sec": 0, 00:20:03.119 "rw_mbytes_per_sec": 0, 00:20:03.119 "r_mbytes_per_sec": 0, 00:20:03.119 "w_mbytes_per_sec": 0 00:20:03.119 }, 00:20:03.119 "claimed": true, 00:20:03.119 "claim_type": "exclusive_write", 00:20:03.119 "zoned": false, 00:20:03.119 "supported_io_types": { 00:20:03.119 "read": true, 00:20:03.119 "write": true, 00:20:03.119 "unmap": true, 00:20:03.119 "write_zeroes": true, 00:20:03.119 "flush": true, 00:20:03.119 "reset": true, 00:20:03.119 "compare": false, 00:20:03.119 "compare_and_write": false, 00:20:03.119 "abort": true, 00:20:03.119 "nvme_admin": false, 00:20:03.119 "nvme_io": false 00:20:03.119 }, 00:20:03.119 "memory_domains": [ 00:20:03.119 { 00:20:03.119 "dma_device_id": "system", 00:20:03.119 "dma_device_type": 1 00:20:03.119 }, 00:20:03.119 { 00:20:03.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.119 "dma_device_type": 2 00:20:03.119 } 00:20:03.119 ], 00:20:03.119 "driver_specific": { 00:20:03.119 "passthru": { 00:20:03.119 "name": "pt2", 00:20:03.119 "base_bdev_name": "malloc2" 00:20:03.119 } 00:20:03.119 } 00:20:03.119 }' 00:20:03.119 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.378 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.378 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.378 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.378 19:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.378 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.378 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.378 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.378 19:04:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.378 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.637 "name": "pt3", 00:20:03.637 "aliases": [ 00:20:03.637 "00000000-0000-0000-0000-000000000003" 00:20:03.637 ], 00:20:03.637 "product_name": "passthru", 00:20:03.637 "block_size": 512, 00:20:03.637 "num_blocks": 65536, 00:20:03.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.637 "assigned_rate_limits": { 00:20:03.637 "rw_ios_per_sec": 0, 00:20:03.637 "rw_mbytes_per_sec": 0, 00:20:03.637 "r_mbytes_per_sec": 0, 00:20:03.637 "w_mbytes_per_sec": 0 00:20:03.637 }, 00:20:03.637 "claimed": true, 00:20:03.637 "claim_type": "exclusive_write", 00:20:03.637 "zoned": false, 00:20:03.637 "supported_io_types": { 00:20:03.637 "read": true, 00:20:03.637 "write": true, 00:20:03.637 "unmap": true, 00:20:03.637 "write_zeroes": true, 00:20:03.637 "flush": true, 00:20:03.637 "reset": true, 00:20:03.637 "compare": false, 00:20:03.637 "compare_and_write": false, 00:20:03.637 "abort": true, 00:20:03.637 "nvme_admin": false, 00:20:03.637 "nvme_io": false 00:20:03.637 }, 00:20:03.637 "memory_domains": [ 00:20:03.637 { 00:20:03.637 "dma_device_id": "system", 00:20:03.637 "dma_device_type": 1 00:20:03.637 }, 
00:20:03.637 { 00:20:03.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.637 "dma_device_type": 2 00:20:03.637 } 00:20:03.637 ], 00:20:03.637 "driver_specific": { 00:20:03.637 "passthru": { 00:20:03.637 "name": "pt3", 00:20:03.637 "base_bdev_name": "malloc3" 00:20:03.637 } 00:20:03.637 } 00:20:03.637 }' 00:20:03.637 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.896 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.155 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.155 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.155 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.155 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.155 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:04.155 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.425 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:20:04.425 "name": "pt4", 00:20:04.425 "aliases": [ 00:20:04.425 "00000000-0000-0000-0000-000000000004" 00:20:04.425 ], 00:20:04.425 "product_name": "passthru", 00:20:04.425 "block_size": 512, 00:20:04.425 "num_blocks": 65536, 00:20:04.425 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:04.425 "assigned_rate_limits": { 00:20:04.425 "rw_ios_per_sec": 0, 00:20:04.425 "rw_mbytes_per_sec": 0, 00:20:04.425 "r_mbytes_per_sec": 0, 00:20:04.425 "w_mbytes_per_sec": 0 00:20:04.425 }, 00:20:04.425 "claimed": true, 00:20:04.425 "claim_type": "exclusive_write", 00:20:04.425 "zoned": false, 00:20:04.425 "supported_io_types": { 00:20:04.425 "read": true, 00:20:04.425 "write": true, 00:20:04.425 "unmap": true, 00:20:04.425 "write_zeroes": true, 00:20:04.425 "flush": true, 00:20:04.425 "reset": true, 00:20:04.425 "compare": false, 00:20:04.425 "compare_and_write": false, 00:20:04.425 "abort": true, 00:20:04.425 "nvme_admin": false, 00:20:04.425 "nvme_io": false 00:20:04.425 }, 00:20:04.425 "memory_domains": [ 00:20:04.425 { 00:20:04.425 "dma_device_id": "system", 00:20:04.425 "dma_device_type": 1 00:20:04.425 }, 00:20:04.425 { 00:20:04.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.425 "dma_device_type": 2 00:20:04.425 } 00:20:04.425 ], 00:20:04.425 "driver_specific": { 00:20:04.425 "passthru": { 00:20:04.425 "name": "pt4", 00:20:04.425 "base_bdev_name": "malloc4" 00:20:04.425 } 00:20:04.425 } 00:20:04.425 }' 00:20:04.425 19:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.425 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.425 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.425 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.425 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.425 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:20:04.425 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:04.685 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.944 [2024-06-10 19:04:19.523410] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.944 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5b53af16-dc60-422f-bf1f-c79bf1f9acc7 00:20:04.944 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 5b53af16-dc60-422f-bf1f-c79bf1f9acc7 ']' 00:20:04.944 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.203 [2024-06-10 19:04:19.751756] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.203 [2024-06-10 19:04:19.751770] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.203 [2024-06-10 19:04:19.751810] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.203 [2024-06-10 19:04:19.751864] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs 
is 0, going to free all in destruct 00:20:05.203 [2024-06-10 19:04:19.751875] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c7a800 name raid_bdev1, state offline 00:20:05.203 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.203 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:05.462 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:05.462 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:05.462 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.462 19:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:05.721 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.721 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:05.721 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.721 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:06.013 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.013 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:06.275 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:06.275 19:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:06.534 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:06.535 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:06.794 [2024-06-10 19:04:21.343882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:06.794 [2024-06-10 19:04:21.345136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:06.794 [2024-06-10 19:04:21.345174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:06.794 [2024-06-10 19:04:21.345206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:06.794 [2024-06-10 19:04:21.345246] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:06.794 [2024-06-10 19:04:21.345283] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:06.794 [2024-06-10 19:04:21.345304] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:06.794 [2024-06-10 19:04:21.345326] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:06.794 [2024-06-10 19:04:21.345343] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.794 [2024-06-10 19:04:21.345353] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c81010 name raid_bdev1, state configuring 00:20:06.794 request: 00:20:06.794 { 00:20:06.794 "name": "raid_bdev1", 00:20:06.794 "raid_level": "raid0", 00:20:06.794 "base_bdevs": [ 
00:20:06.794 "malloc1", 00:20:06.794 "malloc2", 00:20:06.794 "malloc3", 00:20:06.794 "malloc4" 00:20:06.794 ], 00:20:06.794 "superblock": false, 00:20:06.794 "strip_size_kb": 64, 00:20:06.794 "method": "bdev_raid_create", 00:20:06.794 "req_id": 1 00:20:06.794 } 00:20:06.794 Got JSON-RPC error response 00:20:06.794 response: 00:20:06.794 { 00:20:06.794 "code": -17, 00:20:06.794 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:06.794 } 00:20:06.794 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:20:06.794 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:06.794 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:06.794 19:04:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:06.794 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.794 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.054 [2024-06-10 19:04:21.784988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.054 [2024-06-10 19:04:21.785017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.054 [2024-06-10 19:04:21.785035] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c79900 00:20:07.054 [2024-06-10 
19:04:21.785046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.054 [2024-06-10 19:04:21.786468] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.054 [2024-06-10 19:04:21.786500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.054 [2024-06-10 19:04:21.786555] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:07.054 [2024-06-10 19:04:21.786591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.054 pt1 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.054 19:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:07.314 19:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.314 "name": "raid_bdev1", 00:20:07.314 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:07.314 "strip_size_kb": 64, 00:20:07.314 "state": "configuring", 00:20:07.314 "raid_level": "raid0", 00:20:07.314 "superblock": true, 00:20:07.314 "num_base_bdevs": 4, 00:20:07.314 "num_base_bdevs_discovered": 1, 00:20:07.314 "num_base_bdevs_operational": 4, 00:20:07.314 "base_bdevs_list": [ 00:20:07.314 { 00:20:07.314 "name": "pt1", 00:20:07.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.314 "is_configured": true, 00:20:07.314 "data_offset": 2048, 00:20:07.314 "data_size": 63488 00:20:07.314 }, 00:20:07.314 { 00:20:07.314 "name": null, 00:20:07.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.314 "is_configured": false, 00:20:07.314 "data_offset": 2048, 00:20:07.314 "data_size": 63488 00:20:07.314 }, 00:20:07.314 { 00:20:07.314 "name": null, 00:20:07.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:07.314 "is_configured": false, 00:20:07.314 "data_offset": 2048, 00:20:07.314 "data_size": 63488 00:20:07.314 }, 00:20:07.314 { 00:20:07.314 "name": null, 00:20:07.314 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:07.314 "is_configured": false, 00:20:07.314 "data_offset": 2048, 00:20:07.314 "data_size": 63488 00:20:07.314 } 00:20:07.314 ] 00:20:07.314 }' 00:20:07.314 19:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.314 19:04:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.883 19:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:20:07.883 19:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:20:08.142 [2024-06-10 19:04:22.811713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.142 [2024-06-10 19:04:22.811756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.142 [2024-06-10 19:04:22.811772] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c78530 00:20:08.142 [2024-06-10 19:04:22.811784] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.142 [2024-06-10 19:04:22.812090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.142 [2024-06-10 19:04:22.812107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.142 [2024-06-10 19:04:22.812161] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:08.142 [2024-06-10 19:04:22.812178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.142 pt2 00:20:08.142 19:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:08.401 [2024-06-10 19:04:23.036316] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:08.401 19:04:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.401 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.402 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.402 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.402 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.661 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.661 "name": "raid_bdev1", 00:20:08.661 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:08.661 "strip_size_kb": 64, 00:20:08.661 "state": "configuring", 00:20:08.661 "raid_level": "raid0", 00:20:08.661 "superblock": true, 00:20:08.661 "num_base_bdevs": 4, 00:20:08.661 "num_base_bdevs_discovered": 1, 00:20:08.661 "num_base_bdevs_operational": 4, 00:20:08.661 "base_bdevs_list": [ 00:20:08.661 { 00:20:08.661 "name": "pt1", 00:20:08.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.661 "is_configured": true, 00:20:08.661 "data_offset": 2048, 00:20:08.661 "data_size": 63488 00:20:08.661 }, 00:20:08.661 { 00:20:08.661 "name": null, 00:20:08.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.661 "is_configured": false, 00:20:08.661 "data_offset": 2048, 00:20:08.661 "data_size": 63488 00:20:08.661 }, 00:20:08.661 { 00:20:08.661 "name": null, 00:20:08.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:08.661 "is_configured": false, 00:20:08.661 "data_offset": 2048, 00:20:08.661 "data_size": 63488 00:20:08.661 }, 00:20:08.661 { 00:20:08.661 "name": null, 00:20:08.661 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:08.661 
"is_configured": false, 00:20:08.661 "data_offset": 2048, 00:20:08.661 "data_size": 63488 00:20:08.661 } 00:20:08.661 ] 00:20:08.661 }' 00:20:08.661 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.661 19:04:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.228 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:09.228 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:09.228 19:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.487 [2024-06-10 19:04:24.046975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.487 [2024-06-10 19:04:24.047024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.487 [2024-06-10 19:04:24.047041] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c77f70 00:20:09.487 [2024-06-10 19:04:24.047052] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.487 [2024-06-10 19:04:24.047359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.487 [2024-06-10 19:04:24.047375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.487 [2024-06-10 19:04:24.047430] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:09.487 [2024-06-10 19:04:24.047448] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.487 pt2 00:20:09.487 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:09.487 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:09.487 19:04:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:09.746 [2024-06-10 19:04:24.275581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:09.746 [2024-06-10 19:04:24.275615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.746 [2024-06-10 19:04:24.275632] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c781e0 00:20:09.746 [2024-06-10 19:04:24.275644] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.746 [2024-06-10 19:04:24.275908] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.746 [2024-06-10 19:04:24.275924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:09.746 [2024-06-10 19:04:24.275971] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:09.746 [2024-06-10 19:04:24.275987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:09.746 pt3 00:20:09.746 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:09.746 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:09.746 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:09.746 [2024-06-10 19:04:24.500175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:09.746 [2024-06-10 19:04:24.500207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.746 [2024-06-10 19:04:24.500221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x1c776e0 00:20:09.746 [2024-06-10 19:04:24.500233] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.746 [2024-06-10 19:04:24.500481] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.746 [2024-06-10 19:04:24.500498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:09.746 [2024-06-10 19:04:24.500542] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:09.746 [2024-06-10 19:04:24.500557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:09.746 [2024-06-10 19:04:24.500670] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c79ba0 00:20:09.746 [2024-06-10 19:04:24.500680] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:09.746 [2024-06-10 19:04:24.500831] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c7e9c0 00:20:09.746 [2024-06-10 19:04:24.500943] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c79ba0 00:20:09.746 [2024-06-10 19:04:24.500952] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c79ba0 00:20:09.746 [2024-06-10 19:04:24.501038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.006 pt4 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.006 19:04:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.006 "name": "raid_bdev1", 00:20:10.006 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:10.006 "strip_size_kb": 64, 00:20:10.006 "state": "online", 00:20:10.006 "raid_level": "raid0", 00:20:10.006 "superblock": true, 00:20:10.006 "num_base_bdevs": 4, 00:20:10.006 "num_base_bdevs_discovered": 4, 00:20:10.006 "num_base_bdevs_operational": 4, 00:20:10.006 "base_bdevs_list": [ 00:20:10.006 { 00:20:10.006 "name": "pt1", 00:20:10.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.006 "is_configured": true, 00:20:10.006 "data_offset": 2048, 00:20:10.006 "data_size": 63488 00:20:10.006 }, 00:20:10.006 { 00:20:10.006 "name": "pt2", 00:20:10.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.006 "is_configured": true, 00:20:10.006 "data_offset": 2048, 00:20:10.006 "data_size": 63488 00:20:10.006 }, 00:20:10.006 { 
00:20:10.006 "name": "pt3", 00:20:10.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:10.006 "is_configured": true, 00:20:10.006 "data_offset": 2048, 00:20:10.006 "data_size": 63488 00:20:10.006 }, 00:20:10.006 { 00:20:10.006 "name": "pt4", 00:20:10.006 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:10.006 "is_configured": true, 00:20:10.006 "data_offset": 2048, 00:20:10.006 "data_size": 63488 00:20:10.006 } 00:20:10.006 ] 00:20:10.006 }' 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.006 19:04:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:10.944 [2024-06-10 19:04:25.547205] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:10.944 "name": "raid_bdev1", 00:20:10.944 "aliases": [ 00:20:10.944 "5b53af16-dc60-422f-bf1f-c79bf1f9acc7" 00:20:10.944 ], 00:20:10.944 "product_name": "Raid Volume", 00:20:10.944 
"block_size": 512, 00:20:10.944 "num_blocks": 253952, 00:20:10.944 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:10.944 "assigned_rate_limits": { 00:20:10.944 "rw_ios_per_sec": 0, 00:20:10.944 "rw_mbytes_per_sec": 0, 00:20:10.944 "r_mbytes_per_sec": 0, 00:20:10.944 "w_mbytes_per_sec": 0 00:20:10.944 }, 00:20:10.944 "claimed": false, 00:20:10.944 "zoned": false, 00:20:10.944 "supported_io_types": { 00:20:10.944 "read": true, 00:20:10.944 "write": true, 00:20:10.944 "unmap": true, 00:20:10.944 "write_zeroes": true, 00:20:10.944 "flush": true, 00:20:10.944 "reset": true, 00:20:10.944 "compare": false, 00:20:10.944 "compare_and_write": false, 00:20:10.944 "abort": false, 00:20:10.944 "nvme_admin": false, 00:20:10.944 "nvme_io": false 00:20:10.944 }, 00:20:10.944 "memory_domains": [ 00:20:10.944 { 00:20:10.944 "dma_device_id": "system", 00:20:10.944 "dma_device_type": 1 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.944 "dma_device_type": 2 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "system", 00:20:10.944 "dma_device_type": 1 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.944 "dma_device_type": 2 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "system", 00:20:10.944 "dma_device_type": 1 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.944 "dma_device_type": 2 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "system", 00:20:10.944 "dma_device_type": 1 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.944 "dma_device_type": 2 00:20:10.944 } 00:20:10.944 ], 00:20:10.944 "driver_specific": { 00:20:10.944 "raid": { 00:20:10.944 "uuid": "5b53af16-dc60-422f-bf1f-c79bf1f9acc7", 00:20:10.944 "strip_size_kb": 64, 00:20:10.944 "state": "online", 00:20:10.944 "raid_level": "raid0", 00:20:10.944 "superblock": true, 00:20:10.944 "num_base_bdevs": 4, 
00:20:10.944 "num_base_bdevs_discovered": 4, 00:20:10.944 "num_base_bdevs_operational": 4, 00:20:10.944 "base_bdevs_list": [ 00:20:10.944 { 00:20:10.944 "name": "pt1", 00:20:10.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.944 "is_configured": true, 00:20:10.944 "data_offset": 2048, 00:20:10.944 "data_size": 63488 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "name": "pt2", 00:20:10.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.944 "is_configured": true, 00:20:10.944 "data_offset": 2048, 00:20:10.944 "data_size": 63488 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "name": "pt3", 00:20:10.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:10.944 "is_configured": true, 00:20:10.944 "data_offset": 2048, 00:20:10.944 "data_size": 63488 00:20:10.944 }, 00:20:10.944 { 00:20:10.944 "name": "pt4", 00:20:10.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:10.944 "is_configured": true, 00:20:10.944 "data_offset": 2048, 00:20:10.944 "data_size": 63488 00:20:10.944 } 00:20:10.944 ] 00:20:10.944 } 00:20:10.944 } 00:20:10.944 }' 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:10.944 pt2 00:20:10.944 pt3 00:20:10.944 pt4' 00:20:10.944 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.945 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:10.945 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.203 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.203 "name": "pt1", 00:20:11.203 "aliases": [ 00:20:11.203 
"00000000-0000-0000-0000-000000000001" 00:20:11.203 ], 00:20:11.203 "product_name": "passthru", 00:20:11.203 "block_size": 512, 00:20:11.203 "num_blocks": 65536, 00:20:11.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.203 "assigned_rate_limits": { 00:20:11.203 "rw_ios_per_sec": 0, 00:20:11.203 "rw_mbytes_per_sec": 0, 00:20:11.203 "r_mbytes_per_sec": 0, 00:20:11.203 "w_mbytes_per_sec": 0 00:20:11.203 }, 00:20:11.203 "claimed": true, 00:20:11.203 "claim_type": "exclusive_write", 00:20:11.203 "zoned": false, 00:20:11.203 "supported_io_types": { 00:20:11.203 "read": true, 00:20:11.203 "write": true, 00:20:11.203 "unmap": true, 00:20:11.203 "write_zeroes": true, 00:20:11.203 "flush": true, 00:20:11.203 "reset": true, 00:20:11.203 "compare": false, 00:20:11.203 "compare_and_write": false, 00:20:11.203 "abort": true, 00:20:11.203 "nvme_admin": false, 00:20:11.203 "nvme_io": false 00:20:11.203 }, 00:20:11.203 "memory_domains": [ 00:20:11.203 { 00:20:11.203 "dma_device_id": "system", 00:20:11.203 "dma_device_type": 1 00:20:11.203 }, 00:20:11.203 { 00:20:11.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.203 "dma_device_type": 2 00:20:11.203 } 00:20:11.203 ], 00:20:11.203 "driver_specific": { 00:20:11.203 "passthru": { 00:20:11.203 "name": "pt1", 00:20:11.203 "base_bdev_name": "malloc1" 00:20:11.203 } 00:20:11.203 } 00:20:11.203 }' 00:20:11.203 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.203 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.203 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.203 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.462 19:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.462 19:04:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:11.462 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.722 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.722 "name": "pt2", 00:20:11.722 "aliases": [ 00:20:11.722 "00000000-0000-0000-0000-000000000002" 00:20:11.722 ], 00:20:11.722 "product_name": "passthru", 00:20:11.722 "block_size": 512, 00:20:11.722 "num_blocks": 65536, 00:20:11.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.722 "assigned_rate_limits": { 00:20:11.722 "rw_ios_per_sec": 0, 00:20:11.722 "rw_mbytes_per_sec": 0, 00:20:11.722 "r_mbytes_per_sec": 0, 00:20:11.722 "w_mbytes_per_sec": 0 00:20:11.722 }, 00:20:11.722 "claimed": true, 00:20:11.722 "claim_type": "exclusive_write", 00:20:11.722 "zoned": false, 00:20:11.722 "supported_io_types": { 00:20:11.722 "read": true, 00:20:11.722 "write": true, 00:20:11.722 "unmap": true, 00:20:11.722 "write_zeroes": true, 00:20:11.722 "flush": true, 00:20:11.722 "reset": true, 00:20:11.722 "compare": false, 00:20:11.722 "compare_and_write": false, 00:20:11.722 "abort": true, 00:20:11.722 
"nvme_admin": false, 00:20:11.722 "nvme_io": false 00:20:11.722 }, 00:20:11.722 "memory_domains": [ 00:20:11.722 { 00:20:11.722 "dma_device_id": "system", 00:20:11.722 "dma_device_type": 1 00:20:11.722 }, 00:20:11.722 { 00:20:11.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.722 "dma_device_type": 2 00:20:11.722 } 00:20:11.722 ], 00:20:11.722 "driver_specific": { 00:20:11.722 "passthru": { 00:20:11.722 "name": "pt2", 00:20:11.722 "base_bdev_name": "malloc2" 00:20:11.722 } 00:20:11.722 } 00:20:11.722 }' 00:20:11.722 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.722 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.982 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.242 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.242 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.242 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt3 00:20:12.242 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.242 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.242 "name": "pt3", 00:20:12.242 "aliases": [ 00:20:12.242 "00000000-0000-0000-0000-000000000003" 00:20:12.242 ], 00:20:12.242 "product_name": "passthru", 00:20:12.242 "block_size": 512, 00:20:12.242 "num_blocks": 65536, 00:20:12.242 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:12.242 "assigned_rate_limits": { 00:20:12.242 "rw_ios_per_sec": 0, 00:20:12.242 "rw_mbytes_per_sec": 0, 00:20:12.242 "r_mbytes_per_sec": 0, 00:20:12.242 "w_mbytes_per_sec": 0 00:20:12.242 }, 00:20:12.242 "claimed": true, 00:20:12.242 "claim_type": "exclusive_write", 00:20:12.242 "zoned": false, 00:20:12.242 "supported_io_types": { 00:20:12.242 "read": true, 00:20:12.242 "write": true, 00:20:12.242 "unmap": true, 00:20:12.242 "write_zeroes": true, 00:20:12.242 "flush": true, 00:20:12.242 "reset": true, 00:20:12.242 "compare": false, 00:20:12.242 "compare_and_write": false, 00:20:12.242 "abort": true, 00:20:12.242 "nvme_admin": false, 00:20:12.242 "nvme_io": false 00:20:12.242 }, 00:20:12.242 "memory_domains": [ 00:20:12.242 { 00:20:12.242 "dma_device_id": "system", 00:20:12.242 "dma_device_type": 1 00:20:12.242 }, 00:20:12.242 { 00:20:12.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.242 "dma_device_type": 2 00:20:12.242 } 00:20:12.242 ], 00:20:12.242 "driver_specific": { 00:20:12.242 "passthru": { 00:20:12.242 "name": "pt3", 00:20:12.242 "base_bdev_name": "malloc3" 00:20:12.242 } 00:20:12.242 } 00:20:12.242 }' 00:20:12.242 19:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.501 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.761 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.761 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.761 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.761 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:12.761 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:13.020 "name": "pt4", 00:20:13.020 "aliases": [ 00:20:13.020 "00000000-0000-0000-0000-000000000004" 00:20:13.020 ], 00:20:13.020 "product_name": "passthru", 00:20:13.020 "block_size": 512, 00:20:13.020 "num_blocks": 65536, 00:20:13.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:13.020 "assigned_rate_limits": { 00:20:13.020 "rw_ios_per_sec": 0, 00:20:13.020 "rw_mbytes_per_sec": 0, 00:20:13.020 "r_mbytes_per_sec": 0, 00:20:13.020 "w_mbytes_per_sec": 0 00:20:13.020 }, 00:20:13.020 "claimed": true, 00:20:13.020 "claim_type": "exclusive_write", 00:20:13.020 "zoned": false, 00:20:13.020 "supported_io_types": { 00:20:13.020 "read": true, 
00:20:13.020 "write": true, 00:20:13.020 "unmap": true, 00:20:13.020 "write_zeroes": true, 00:20:13.020 "flush": true, 00:20:13.020 "reset": true, 00:20:13.020 "compare": false, 00:20:13.020 "compare_and_write": false, 00:20:13.020 "abort": true, 00:20:13.020 "nvme_admin": false, 00:20:13.020 "nvme_io": false 00:20:13.020 }, 00:20:13.020 "memory_domains": [ 00:20:13.020 { 00:20:13.020 "dma_device_id": "system", 00:20:13.020 "dma_device_type": 1 00:20:13.020 }, 00:20:13.020 { 00:20:13.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.020 "dma_device_type": 2 00:20:13.020 } 00:20:13.020 ], 00:20:13.020 "driver_specific": { 00:20:13.020 "passthru": { 00:20:13.020 "name": "pt4", 00:20:13.020 "base_bdev_name": "malloc4" 00:20:13.020 } 00:20:13.020 } 00:20:13.020 }' 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.020 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.279 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:13.279 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.279 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.279 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:13.279 19:04:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:13.279 19:04:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:13.539 [2024-06-10 19:04:28.077871] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 5b53af16-dc60-422f-bf1f-c79bf1f9acc7 '!=' 5b53af16-dc60-422f-bf1f-c79bf1f9acc7 ']' 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1702827 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1702827 ']' 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1702827 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1702827 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1702827' 00:20:13.539 killing process with pid 1702827 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # kill 1702827 00:20:13.539 [2024-06-10 19:04:28.158416] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:13.539 [2024-06-10 19:04:28.158477] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.539 [2024-06-10 19:04:28.158531] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.539 [2024-06-10 19:04:28.158541] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c79ba0 name raid_bdev1, state offline 00:20:13.539 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1702827 00:20:13.539 [2024-06-10 19:04:28.190681] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.799 19:04:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:13.799 00:20:13.799 real 0m15.420s 00:20:13.799 user 0m27.800s 00:20:13.799 sys 0m2.780s 00:20:13.799 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:13.799 19:04:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.799 ************************************ 00:20:13.799 END TEST raid_superblock_test 00:20:13.799 ************************************ 00:20:13.799 19:04:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:20:13.799 19:04:28 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:13.799 19:04:28 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:13.799 19:04:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.799 ************************************ 00:20:13.799 START TEST raid_read_error_test 00:20:13.799 ************************************ 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 4 read 00:20:13.799 19:04:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Dp8HISpYQl 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1705820 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1705820 /var/tmp/spdk-raid.sock 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1705820 ']' 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:20:13.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:13.799 19:04:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.799 [2024-06-10 19:04:28.545281] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:20:13.799 [2024-06-10 19:04:28.545337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705820 ] 00:20:14.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:14.059 EAL: Requested device 0000:b6:01.0 cannot be used 00:20:14.059 [2024-06-10 19:04:28.676641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.059 [2024-06-10 19:04:28.762950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.319 [2024-06-10 19:04:28.826806] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.319 [2024-06-10 19:04:28.826839] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.887 19:04:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:14.887 19:04:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:20:14.887 19:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in
"${base_bdevs[@]}" 00:20:14.887 19:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:15.147 BaseBdev1_malloc 00:20:15.147 19:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:15.147 true 00:20:15.406 19:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:15.406 [2024-06-10 19:04:30.112067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:15.406 [2024-06-10 19:04:30.112111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.406 [2024-06-10 19:04:30.112132] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2979d50 00:20:15.406 [2024-06-10 19:04:30.112144] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.406 [2024-06-10 19:04:30.113810] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.406 [2024-06-10 19:04:30.113844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.406 BaseBdev1 00:20:15.407 19:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:15.407 19:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:15.666 BaseBdev2_malloc 00:20:15.666 19:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:15.925 true 00:20:15.925 19:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:16.185 [2024-06-10 19:04:30.790214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:16.185 [2024-06-10 19:04:30.790251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.185 [2024-06-10 19:04:30.790270] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x297f2e0 00:20:16.185 [2024-06-10 19:04:30.790282] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.185 [2024-06-10 19:04:30.791660] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.185 [2024-06-10 19:04:30.791687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:16.185 BaseBdev2 00:20:16.185 19:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:16.185 19:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:16.444 BaseBdev3_malloc 00:20:16.444 19:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:16.704 true 00:20:16.704 19:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:16.704 [2024-06-10 19:04:31.444149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:20:16.704 [2024-06-10 19:04:31.444188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.704 [2024-06-10 19:04:31.444208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2980fd0 00:20:16.704 [2024-06-10 19:04:31.444219] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.704 [2024-06-10 19:04:31.445590] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.704 [2024-06-10 19:04:31.445617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:16.704 BaseBdev3 00:20:16.963 19:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:16.963 19:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:16.963 BaseBdev4_malloc 00:20:16.963 19:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:17.222 true 00:20:17.222 19:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:17.482 [2024-06-10 19:04:32.102209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:17.482 [2024-06-10 19:04:32.102247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.482 [2024-06-10 19:04:32.102267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2982830 00:20:17.482 [2024-06-10 19:04:32.102283] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.482 [2024-06-10 
19:04:32.103621] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.482 [2024-06-10 19:04:32.103647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:17.482 BaseBdev4 00:20:17.482 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:17.741 [2024-06-10 19:04:32.326827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.741 [2024-06-10 19:04:32.327906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.741 [2024-06-10 19:04:32.327968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.741 [2024-06-10 19:04:32.328025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:17.741 [2024-06-10 19:04:32.328233] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x29833a0 00:20:17.741 [2024-06-10 19:04:32.328244] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:17.741 [2024-06-10 19:04:32.328403] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2984ac0 00:20:17.741 [2024-06-10 19:04:32.328532] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x29833a0 00:20:17.741 [2024-06-10 19:04:32.328542] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x29833a0 00:20:17.741 [2024-06-10 19:04:32.328639] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.741 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.000 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.000 "name": "raid_bdev1", 00:20:18.000 "uuid": "f0a03bc0-a367-4c84-976a-f7ce14358940", 00:20:18.000 "strip_size_kb": 64, 00:20:18.000 "state": "online", 00:20:18.000 "raid_level": "raid0", 00:20:18.000 "superblock": true, 00:20:18.000 "num_base_bdevs": 4, 00:20:18.000 "num_base_bdevs_discovered": 4, 00:20:18.000 "num_base_bdevs_operational": 4, 00:20:18.000 "base_bdevs_list": [ 00:20:18.000 { 00:20:18.000 "name": "BaseBdev1", 00:20:18.000 "uuid": "3946103e-61d6-5f25-b3f0-0ba4eb530b6c", 00:20:18.000 "is_configured": true, 00:20:18.000 "data_offset": 2048, 00:20:18.000 "data_size": 63488 00:20:18.000 }, 00:20:18.000 { 00:20:18.000 "name": "BaseBdev2", 00:20:18.000 "uuid": 
"7e574c5c-6958-54c2-a462-110fae9840c2", 00:20:18.000 "is_configured": true, 00:20:18.000 "data_offset": 2048, 00:20:18.000 "data_size": 63488 00:20:18.000 }, 00:20:18.000 { 00:20:18.000 "name": "BaseBdev3", 00:20:18.000 "uuid": "61315919-d969-5386-bfc2-78411562773d", 00:20:18.000 "is_configured": true, 00:20:18.000 "data_offset": 2048, 00:20:18.000 "data_size": 63488 00:20:18.000 }, 00:20:18.000 { 00:20:18.000 "name": "BaseBdev4", 00:20:18.000 "uuid": "09e2d0cc-05ab-5ccb-9fa3-d65722d0d8cb", 00:20:18.000 "is_configured": true, 00:20:18.000 "data_offset": 2048, 00:20:18.000 "data_size": 63488 00:20:18.000 } 00:20:18.000 ] 00:20:18.000 }' 00:20:18.000 19:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.000 19:04:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.570 19:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:18.570 19:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:18.570 [2024-06-10 19:04:33.261492] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2984940 00:20:19.508 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:19.768 19:04:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.768 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.027 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.027 "name": "raid_bdev1", 00:20:20.027 "uuid": "f0a03bc0-a367-4c84-976a-f7ce14358940", 00:20:20.027 "strip_size_kb": 64, 00:20:20.027 "state": "online", 00:20:20.027 "raid_level": "raid0", 00:20:20.027 "superblock": true, 00:20:20.027 "num_base_bdevs": 4, 00:20:20.027 "num_base_bdevs_discovered": 4, 00:20:20.027 "num_base_bdevs_operational": 4, 00:20:20.027 "base_bdevs_list": [ 00:20:20.027 { 00:20:20.027 "name": "BaseBdev1", 00:20:20.027 "uuid": "3946103e-61d6-5f25-b3f0-0ba4eb530b6c", 00:20:20.027 "is_configured": true, 00:20:20.027 "data_offset": 2048, 00:20:20.027 "data_size": 63488 00:20:20.027 }, 
00:20:20.027 { 00:20:20.027 "name": "BaseBdev2", 00:20:20.027 "uuid": "7e574c5c-6958-54c2-a462-110fae9840c2", 00:20:20.027 "is_configured": true, 00:20:20.027 "data_offset": 2048, 00:20:20.027 "data_size": 63488 00:20:20.027 }, 00:20:20.027 { 00:20:20.027 "name": "BaseBdev3", 00:20:20.027 "uuid": "61315919-d969-5386-bfc2-78411562773d", 00:20:20.027 "is_configured": true, 00:20:20.027 "data_offset": 2048, 00:20:20.027 "data_size": 63488 00:20:20.027 }, 00:20:20.027 { 00:20:20.027 "name": "BaseBdev4", 00:20:20.027 "uuid": "09e2d0cc-05ab-5ccb-9fa3-d65722d0d8cb", 00:20:20.027 "is_configured": true, 00:20:20.027 "data_offset": 2048, 00:20:20.027 "data_size": 63488 00:20:20.027 } 00:20:20.027 ] 00:20:20.027 }' 00:20:20.027 19:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.027 19:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.596 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:20.856 [2024-06-10 19:04:35.395614] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:20.856 [2024-06-10 19:04:35.395646] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:20.856 [2024-06-10 19:04:35.398545] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.856 [2024-06-10 19:04:35.398592] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.856 [2024-06-10 19:04:35.398628] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.856 [2024-06-10 19:04:35.398638] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x29833a0 name raid_bdev1, state offline 00:20:20.856 0 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 
1705820 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1705820 ']' 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1705820 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1705820 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1705820' 00:20:20.856 killing process with pid 1705820 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1705820 00:20:20.856 [2024-06-10 19:04:35.472513] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.856 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1705820 00:20:20.856 [2024-06-10 19:04:35.498848] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Dp8HISpYQl 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 
00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:21.115 00:20:21.115 real 0m7.234s 00:20:21.115 user 0m11.517s 00:20:21.115 sys 0m1.265s 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:21.115 19:04:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.115 ************************************ 00:20:21.115 END TEST raid_read_error_test 00:20:21.115 ************************************ 00:20:21.115 19:04:35 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:20:21.115 19:04:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:21.115 19:04:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:21.115 19:04:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.115 ************************************ 00:20:21.115 START TEST raid_write_error_test 00:20:21.115 ************************************ 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 4 write 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # 
(( i++ )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:21.115 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.oXUVVVgNFT 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1707048 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1707048 /var/tmp/spdk-raid.sock 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1707048 ']' 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:21.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:21.116 19:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.116 [2024-06-10 19:04:35.869761] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:20:21.116 [2024-06-10 19:04:35.869823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707048 ] 00:20:21.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.375 EAL: Requested device 0000:b6:01.0 cannot be used 00:20:21.376 [2024-06-10 19:04:36.004070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.376 [2024-06-10 19:04:36.090687] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.635 [2024-06-10 19:04:36.155802] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.635 [2024-06-10 19:04:36.155843] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.204 19:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:22.204 19:04:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:20:22.204 19:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:22.204 19:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.463 BaseBdev1_malloc 00:20:22.463 19:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:22.463 true 00:20:22.463 19:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:22.722 [2024-06-10 19:04:37.410374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:22.723 [2024-06-10 19:04:37.410412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.723 [2024-06-10 19:04:37.410429] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x235bd50 00:20:22.723 [2024-06-10 19:04:37.410441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.723 [2024-06-10 19:04:37.411969] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.723 [2024-06-10 19:04:37.411996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.723 BaseBdev1 00:20:22.723 19:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:22.723 19:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.982 BaseBdev2_malloc 00:20:22.982 19:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:23.241 true 00:20:23.241 19:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:23.500 [2024-06-10 19:04:38.084366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:20:23.500 [2024-06-10 19:04:38.084404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.500 [2024-06-10 19:04:38.084421] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23612e0 00:20:23.500 [2024-06-10 19:04:38.084433] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.500 [2024-06-10 19:04:38.085765] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.500 [2024-06-10 19:04:38.085790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:23.500 BaseBdev2 00:20:23.500 19:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:23.500 19:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:23.759 BaseBdev3_malloc 00:20:23.759 19:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:24.018 true 00:20:24.018 19:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:24.018 [2024-06-10 19:04:38.754467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:24.018 [2024-06-10 19:04:38.754504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.018 [2024-06-10 19:04:38.754520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2362fd0 00:20:24.018 [2024-06-10 19:04:38.754532] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.018 [2024-06-10 
19:04:38.755840] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.018 [2024-06-10 19:04:38.755866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:24.018 BaseBdev3 00:20:24.018 19:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:24.018 19:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:24.278 BaseBdev4_malloc 00:20:24.278 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:24.537 true 00:20:24.537 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:24.809 [2024-06-10 19:04:39.444681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:24.809 [2024-06-10 19:04:39.444721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.809 [2024-06-10 19:04:39.444738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2364830 00:20:24.809 [2024-06-10 19:04:39.444749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.809 [2024-06-10 19:04:39.446107] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.809 [2024-06-10 19:04:39.446133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:24.809 BaseBdev4 00:20:24.810 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:25.133 [2024-06-10 19:04:39.669302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.133 [2024-06-10 19:04:39.670507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.133 [2024-06-10 19:04:39.670571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:25.133 [2024-06-10 19:04:39.670634] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:25.133 [2024-06-10 19:04:39.670844] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23653a0 00:20:25.133 [2024-06-10 19:04:39.670854] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:25.133 [2024-06-10 19:04:39.671029] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2366ac0 00:20:25.133 [2024-06-10 19:04:39.671164] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23653a0 00:20:25.133 [2024-06-10 19:04:39.671172] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23653a0 00:20:25.133 [2024-06-10 19:04:39.671264] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.133 19:04:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.133 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.427 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.427 "name": "raid_bdev1", 00:20:25.427 "uuid": "fe4db390-205c-4dea-92a9-0dc13ca6340c", 00:20:25.427 "strip_size_kb": 64, 00:20:25.427 "state": "online", 00:20:25.427 "raid_level": "raid0", 00:20:25.427 "superblock": true, 00:20:25.427 "num_base_bdevs": 4, 00:20:25.427 "num_base_bdevs_discovered": 4, 00:20:25.427 "num_base_bdevs_operational": 4, 00:20:25.427 "base_bdevs_list": [ 00:20:25.427 { 00:20:25.427 "name": "BaseBdev1", 00:20:25.427 "uuid": "2fe3f516-ff1a-5cd5-bad3-2bd57336bf94", 00:20:25.427 "is_configured": true, 00:20:25.427 "data_offset": 2048, 00:20:25.427 "data_size": 63488 00:20:25.427 }, 00:20:25.427 { 00:20:25.427 "name": "BaseBdev2", 00:20:25.427 "uuid": "ad220555-9fb7-5813-b7a2-cb3f4a2b9908", 00:20:25.427 "is_configured": true, 00:20:25.427 "data_offset": 2048, 00:20:25.427 "data_size": 63488 00:20:25.427 }, 00:20:25.427 { 00:20:25.427 "name": "BaseBdev3", 00:20:25.427 "uuid": "61f37048-383c-5e79-98d2-a3feebf57148", 00:20:25.427 "is_configured": true, 00:20:25.427 "data_offset": 2048, 00:20:25.427 "data_size": 
63488 00:20:25.427 }, 00:20:25.427 { 00:20:25.427 "name": "BaseBdev4", 00:20:25.427 "uuid": "1263e7ee-9a46-5bf6-b611-0c3b6bfcb9ff", 00:20:25.427 "is_configured": true, 00:20:25.427 "data_offset": 2048, 00:20:25.427 "data_size": 63488 00:20:25.427 } 00:20:25.427 ] 00:20:25.427 }' 00:20:25.427 19:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.427 19:04:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.996 19:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:25.996 19:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:25.996 [2024-06-10 19:04:40.563865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2366940 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.934 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.935 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.935 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.194 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.194 "name": "raid_bdev1", 00:20:27.194 "uuid": "fe4db390-205c-4dea-92a9-0dc13ca6340c", 00:20:27.194 "strip_size_kb": 64, 00:20:27.194 "state": "online", 00:20:27.194 "raid_level": "raid0", 00:20:27.194 "superblock": true, 00:20:27.194 "num_base_bdevs": 4, 00:20:27.194 "num_base_bdevs_discovered": 4, 00:20:27.194 "num_base_bdevs_operational": 4, 00:20:27.194 "base_bdevs_list": [ 00:20:27.194 { 00:20:27.194 "name": "BaseBdev1", 00:20:27.194 "uuid": "2fe3f516-ff1a-5cd5-bad3-2bd57336bf94", 00:20:27.194 "is_configured": true, 00:20:27.194 "data_offset": 2048, 00:20:27.194 "data_size": 63488 00:20:27.194 }, 00:20:27.194 { 00:20:27.194 "name": "BaseBdev2", 00:20:27.194 "uuid": "ad220555-9fb7-5813-b7a2-cb3f4a2b9908", 00:20:27.194 "is_configured": true, 00:20:27.194 "data_offset": 2048, 00:20:27.194 "data_size": 63488 00:20:27.194 }, 00:20:27.194 { 00:20:27.194 "name": "BaseBdev3", 00:20:27.194 "uuid": "61f37048-383c-5e79-98d2-a3feebf57148", 00:20:27.194 "is_configured": 
true, 00:20:27.194 "data_offset": 2048, 00:20:27.194 "data_size": 63488 00:20:27.194 }, 00:20:27.194 { 00:20:27.194 "name": "BaseBdev4", 00:20:27.194 "uuid": "1263e7ee-9a46-5bf6-b611-0c3b6bfcb9ff", 00:20:27.194 "is_configured": true, 00:20:27.194 "data_offset": 2048, 00:20:27.194 "data_size": 63488 00:20:27.194 } 00:20:27.194 ] 00:20:27.194 }' 00:20:27.194 19:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.194 19:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.763 19:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:28.022 [2024-06-10 19:04:42.690289] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.022 [2024-06-10 19:04:42.690321] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.022 [2024-06-10 19:04:42.693233] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.022 [2024-06-10 19:04:42.693266] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.022 [2024-06-10 19:04:42.693301] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.022 [2024-06-10 19:04:42.693311] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23653a0 name raid_bdev1, state offline 00:20:28.022 0 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1707048 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1707048 ']' 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1707048 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:20:28.022 19:04:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1707048 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1707048' 00:20:28.022 killing process with pid 1707048 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1707048 00:20:28.022 [2024-06-10 19:04:42.768811] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.022 19:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1707048 00:20:28.282 [2024-06-10 19:04:42.795815] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.282 19:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.oXUVVVgNFT 00:20:28.282 19:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:28.282 19:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:28.282 19:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:20:28.282 19:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:28.282 19:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:28.282 19:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:28.282 19:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:28.282 00:20:28.282 real 0m7.211s 00:20:28.282 user 0m11.420s 00:20:28.282 sys 0m1.265s 00:20:28.282 19:04:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:28.282 19:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.282 ************************************ 00:20:28.282 END TEST raid_write_error_test 00:20:28.282 ************************************ 00:20:28.542 19:04:43 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:28.542 19:04:43 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:20:28.542 19:04:43 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:28.542 19:04:43 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:28.542 19:04:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.542 ************************************ 00:20:28.542 START TEST raid_state_function_test 00:20:28.542 ************************************ 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 4 false 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1708442 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1708442' 00:20:28.542 Process raid pid: 1708442 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1708442 /var/tmp/spdk-raid.sock 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1708442 ']' 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:28.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:28.542 19:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.542 [2024-06-10 19:04:43.159350] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:20:28.542 [2024-06-10 19:04:43.159408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:28.542 EAL: Requested device 0000:b6:01.0 cannot be used 00:20:28.543 [2024-06-10 19:04:43.292592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.802 [2024-06-10 19:04:43.379432] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.802 [2024-06-10 19:04:43.443478] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.802 [2024-06-10 19:04:43.443509] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.369 19:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:29.369 19:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:20:29.369 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:29.629 [2024-06-10 19:04:44.261348] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.629 [2024-06-10 19:04:44.261388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.629 [2024-06-10 
19:04:44.261398] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.629 [2024-06-10 19:04:44.261409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.629 [2024-06-10 19:04:44.261417] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:29.629 [2024-06-10 19:04:44.261427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:29.629 [2024-06-10 19:04:44.261435] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:29.629 [2024-06-10 19:04:44.261453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.629 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.889 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.889 "name": "Existed_Raid", 00:20:29.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.889 "strip_size_kb": 64, 00:20:29.889 "state": "configuring", 00:20:29.889 "raid_level": "concat", 00:20:29.889 "superblock": false, 00:20:29.889 "num_base_bdevs": 4, 00:20:29.889 "num_base_bdevs_discovered": 0, 00:20:29.889 "num_base_bdevs_operational": 4, 00:20:29.889 "base_bdevs_list": [ 00:20:29.889 { 00:20:29.889 "name": "BaseBdev1", 00:20:29.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.889 "is_configured": false, 00:20:29.889 "data_offset": 0, 00:20:29.889 "data_size": 0 00:20:29.889 }, 00:20:29.889 { 00:20:29.889 "name": "BaseBdev2", 00:20:29.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.889 "is_configured": false, 00:20:29.889 "data_offset": 0, 00:20:29.889 "data_size": 0 00:20:29.889 }, 00:20:29.889 { 00:20:29.889 "name": "BaseBdev3", 00:20:29.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.889 "is_configured": false, 00:20:29.889 "data_offset": 0, 00:20:29.889 "data_size": 0 00:20:29.889 }, 00:20:29.889 { 00:20:29.889 "name": "BaseBdev4", 00:20:29.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.889 "is_configured": false, 00:20:29.889 "data_offset": 0, 00:20:29.889 "data_size": 0 00:20:29.889 } 00:20:29.889 ] 00:20:29.889 }' 00:20:29.889 19:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.889 19:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.457 19:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:30.778 [2024-06-10 19:04:45.279909] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:30.779 [2024-06-10 19:04:45.279936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d75f50 name Existed_Raid, state configuring 00:20:30.779 19:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:30.779 [2024-06-10 19:04:45.508519] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:30.779 [2024-06-10 19:04:45.508549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:30.779 [2024-06-10 19:04:45.508557] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.779 [2024-06-10 19:04:45.508568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.779 [2024-06-10 19:04:45.508585] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.779 [2024-06-10 19:04:45.508595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.779 [2024-06-10 19:04:45.508604] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:30.779 [2024-06-10 19:04:45.508618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:30.779 19:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:31.038 [2024-06-10 19:04:45.746517] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.038 BaseBdev1 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:31.038 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.297 19:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:31.556 [ 00:20:31.556 { 00:20:31.556 "name": "BaseBdev1", 00:20:31.556 "aliases": [ 00:20:31.556 "df693274-a493-48cd-bd20-9fd0797d6861" 00:20:31.556 ], 00:20:31.556 "product_name": "Malloc disk", 00:20:31.556 "block_size": 512, 00:20:31.556 "num_blocks": 65536, 00:20:31.556 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:31.556 "assigned_rate_limits": { 00:20:31.556 "rw_ios_per_sec": 0, 00:20:31.556 "rw_mbytes_per_sec": 0, 00:20:31.556 "r_mbytes_per_sec": 0, 00:20:31.556 "w_mbytes_per_sec": 0 00:20:31.556 }, 00:20:31.556 "claimed": true, 00:20:31.556 "claim_type": "exclusive_write", 00:20:31.556 "zoned": false, 00:20:31.556 "supported_io_types": { 00:20:31.556 "read": true, 00:20:31.556 "write": true, 00:20:31.556 "unmap": true, 00:20:31.556 "write_zeroes": true, 
00:20:31.556 "flush": true, 00:20:31.556 "reset": true, 00:20:31.556 "compare": false, 00:20:31.556 "compare_and_write": false, 00:20:31.556 "abort": true, 00:20:31.556 "nvme_admin": false, 00:20:31.556 "nvme_io": false 00:20:31.556 }, 00:20:31.556 "memory_domains": [ 00:20:31.556 { 00:20:31.556 "dma_device_id": "system", 00:20:31.556 "dma_device_type": 1 00:20:31.556 }, 00:20:31.556 { 00:20:31.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.556 "dma_device_type": 2 00:20:31.556 } 00:20:31.556 ], 00:20:31.556 "driver_specific": {} 00:20:31.556 } 00:20:31.556 ] 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.556 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.556 19:04:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.815 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.815 "name": "Existed_Raid", 00:20:31.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.815 "strip_size_kb": 64, 00:20:31.815 "state": "configuring", 00:20:31.815 "raid_level": "concat", 00:20:31.815 "superblock": false, 00:20:31.815 "num_base_bdevs": 4, 00:20:31.815 "num_base_bdevs_discovered": 1, 00:20:31.815 "num_base_bdevs_operational": 4, 00:20:31.816 "base_bdevs_list": [ 00:20:31.816 { 00:20:31.816 "name": "BaseBdev1", 00:20:31.816 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:31.816 "is_configured": true, 00:20:31.816 "data_offset": 0, 00:20:31.816 "data_size": 65536 00:20:31.816 }, 00:20:31.816 { 00:20:31.816 "name": "BaseBdev2", 00:20:31.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.816 "is_configured": false, 00:20:31.816 "data_offset": 0, 00:20:31.816 "data_size": 0 00:20:31.816 }, 00:20:31.816 { 00:20:31.816 "name": "BaseBdev3", 00:20:31.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.816 "is_configured": false, 00:20:31.816 "data_offset": 0, 00:20:31.816 "data_size": 0 00:20:31.816 }, 00:20:31.816 { 00:20:31.816 "name": "BaseBdev4", 00:20:31.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.816 "is_configured": false, 00:20:31.816 "data_offset": 0, 00:20:31.816 "data_size": 0 00:20:31.816 } 00:20:31.816 ] 00:20:31.816 }' 00:20:31.816 19:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.816 19:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.384 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:20:32.643 [2024-06-10 19:04:47.214377] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.643 [2024-06-10 19:04:47.214409] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d757c0 name Existed_Raid, state configuring 00:20:32.643 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:32.903 [2024-06-10 19:04:47.443009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.903 [2024-06-10 19:04:47.444343] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:32.903 [2024-06-10 19:04:47.444373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.903 [2024-06-10 19:04:47.444383] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.903 [2024-06-10 19:04:47.444398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.903 [2024-06-10 19:04:47.444407] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:32.903 [2024-06-10 19:04:47.444418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.903 19:04:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.903 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.162 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.162 "name": "Existed_Raid", 00:20:33.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.162 "strip_size_kb": 64, 00:20:33.162 "state": "configuring", 00:20:33.162 "raid_level": "concat", 00:20:33.162 "superblock": false, 00:20:33.162 "num_base_bdevs": 4, 00:20:33.162 "num_base_bdevs_discovered": 1, 00:20:33.162 "num_base_bdevs_operational": 4, 00:20:33.162 "base_bdevs_list": [ 00:20:33.162 { 00:20:33.162 "name": "BaseBdev1", 00:20:33.162 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:33.162 "is_configured": true, 00:20:33.162 "data_offset": 0, 00:20:33.162 "data_size": 65536 00:20:33.162 }, 00:20:33.162 { 00:20:33.162 "name": "BaseBdev2", 00:20:33.162 
"uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.162 "is_configured": false, 00:20:33.162 "data_offset": 0, 00:20:33.162 "data_size": 0 00:20:33.162 }, 00:20:33.162 { 00:20:33.162 "name": "BaseBdev3", 00:20:33.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.162 "is_configured": false, 00:20:33.162 "data_offset": 0, 00:20:33.162 "data_size": 0 00:20:33.162 }, 00:20:33.162 { 00:20:33.162 "name": "BaseBdev4", 00:20:33.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.162 "is_configured": false, 00:20:33.162 "data_offset": 0, 00:20:33.162 "data_size": 0 00:20:33.162 } 00:20:33.162 ] 00:20:33.162 }' 00:20:33.162 19:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.162 19:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.729 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:33.729 [2024-06-10 19:04:48.468922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:33.729 BaseBdev2 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.987 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:34.246 [ 00:20:34.246 { 00:20:34.246 "name": "BaseBdev2", 00:20:34.246 "aliases": [ 00:20:34.246 "879bd952-ec8a-4550-863d-921674d3e1e0" 00:20:34.246 ], 00:20:34.246 "product_name": "Malloc disk", 00:20:34.246 "block_size": 512, 00:20:34.246 "num_blocks": 65536, 00:20:34.246 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:34.246 "assigned_rate_limits": { 00:20:34.246 "rw_ios_per_sec": 0, 00:20:34.246 "rw_mbytes_per_sec": 0, 00:20:34.246 "r_mbytes_per_sec": 0, 00:20:34.246 "w_mbytes_per_sec": 0 00:20:34.246 }, 00:20:34.246 "claimed": true, 00:20:34.246 "claim_type": "exclusive_write", 00:20:34.246 "zoned": false, 00:20:34.246 "supported_io_types": { 00:20:34.246 "read": true, 00:20:34.246 "write": true, 00:20:34.246 "unmap": true, 00:20:34.246 "write_zeroes": true, 00:20:34.246 "flush": true, 00:20:34.246 "reset": true, 00:20:34.246 "compare": false, 00:20:34.246 "compare_and_write": false, 00:20:34.246 "abort": true, 00:20:34.246 "nvme_admin": false, 00:20:34.246 "nvme_io": false 00:20:34.246 }, 00:20:34.246 "memory_domains": [ 00:20:34.246 { 00:20:34.246 "dma_device_id": "system", 00:20:34.246 "dma_device_type": 1 00:20:34.246 }, 00:20:34.246 { 00:20:34.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.246 "dma_device_type": 2 00:20:34.246 } 00:20:34.246 ], 00:20:34.246 "driver_specific": {} 00:20:34.246 } 00:20:34.246 ] 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.246 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.247 19:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.506 19:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.506 "name": "Existed_Raid", 00:20:34.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.506 "strip_size_kb": 64, 00:20:34.506 "state": "configuring", 00:20:34.506 "raid_level": "concat", 00:20:34.506 "superblock": false, 00:20:34.506 "num_base_bdevs": 4, 00:20:34.506 "num_base_bdevs_discovered": 2, 00:20:34.506 "num_base_bdevs_operational": 4, 00:20:34.506 
"base_bdevs_list": [ 00:20:34.506 { 00:20:34.506 "name": "BaseBdev1", 00:20:34.506 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:34.506 "is_configured": true, 00:20:34.506 "data_offset": 0, 00:20:34.506 "data_size": 65536 00:20:34.506 }, 00:20:34.506 { 00:20:34.506 "name": "BaseBdev2", 00:20:34.506 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:34.506 "is_configured": true, 00:20:34.506 "data_offset": 0, 00:20:34.506 "data_size": 65536 00:20:34.506 }, 00:20:34.506 { 00:20:34.506 "name": "BaseBdev3", 00:20:34.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.506 "is_configured": false, 00:20:34.506 "data_offset": 0, 00:20:34.506 "data_size": 0 00:20:34.506 }, 00:20:34.506 { 00:20:34.506 "name": "BaseBdev4", 00:20:34.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.506 "is_configured": false, 00:20:34.506 "data_offset": 0, 00:20:34.506 "data_size": 0 00:20:34.506 } 00:20:34.506 ] 00:20:34.506 }' 00:20:34.506 19:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.506 19:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.074 19:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:35.333 [2024-06-10 19:04:49.967976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:35.333 BaseBdev3 00:20:35.333 19:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:35.333 19:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:20:35.333 19:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:35.333 19:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:35.333 19:04:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:35.333 19:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:35.333 19:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.592 19:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:35.851 [ 00:20:35.851 { 00:20:35.851 "name": "BaseBdev3", 00:20:35.851 "aliases": [ 00:20:35.851 "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85" 00:20:35.851 ], 00:20:35.851 "product_name": "Malloc disk", 00:20:35.851 "block_size": 512, 00:20:35.851 "num_blocks": 65536, 00:20:35.851 "uuid": "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85", 00:20:35.851 "assigned_rate_limits": { 00:20:35.851 "rw_ios_per_sec": 0, 00:20:35.851 "rw_mbytes_per_sec": 0, 00:20:35.851 "r_mbytes_per_sec": 0, 00:20:35.851 "w_mbytes_per_sec": 0 00:20:35.851 }, 00:20:35.851 "claimed": true, 00:20:35.851 "claim_type": "exclusive_write", 00:20:35.851 "zoned": false, 00:20:35.851 "supported_io_types": { 00:20:35.851 "read": true, 00:20:35.851 "write": true, 00:20:35.851 "unmap": true, 00:20:35.851 "write_zeroes": true, 00:20:35.851 "flush": true, 00:20:35.851 "reset": true, 00:20:35.851 "compare": false, 00:20:35.851 "compare_and_write": false, 00:20:35.851 "abort": true, 00:20:35.851 "nvme_admin": false, 00:20:35.851 "nvme_io": false 00:20:35.851 }, 00:20:35.851 "memory_domains": [ 00:20:35.851 { 00:20:35.851 "dma_device_id": "system", 00:20:35.851 "dma_device_type": 1 00:20:35.851 }, 00:20:35.851 { 00:20:35.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.851 "dma_device_type": 2 00:20:35.851 } 00:20:35.851 ], 00:20:35.851 "driver_specific": {} 00:20:35.851 } 00:20:35.851 ] 00:20:35.851 
19:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.851 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.110 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.110 "name": "Existed_Raid", 00:20:36.110 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:36.110 "strip_size_kb": 64, 00:20:36.110 "state": "configuring", 00:20:36.110 "raid_level": "concat", 00:20:36.110 "superblock": false, 00:20:36.110 "num_base_bdevs": 4, 00:20:36.110 "num_base_bdevs_discovered": 3, 00:20:36.110 "num_base_bdevs_operational": 4, 00:20:36.110 "base_bdevs_list": [ 00:20:36.110 { 00:20:36.110 "name": "BaseBdev1", 00:20:36.110 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:36.110 "is_configured": true, 00:20:36.110 "data_offset": 0, 00:20:36.110 "data_size": 65536 00:20:36.110 }, 00:20:36.110 { 00:20:36.110 "name": "BaseBdev2", 00:20:36.110 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:36.110 "is_configured": true, 00:20:36.110 "data_offset": 0, 00:20:36.110 "data_size": 65536 00:20:36.110 }, 00:20:36.110 { 00:20:36.110 "name": "BaseBdev3", 00:20:36.110 "uuid": "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85", 00:20:36.110 "is_configured": true, 00:20:36.110 "data_offset": 0, 00:20:36.110 "data_size": 65536 00:20:36.110 }, 00:20:36.110 { 00:20:36.110 "name": "BaseBdev4", 00:20:36.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.110 "is_configured": false, 00:20:36.110 "data_offset": 0, 00:20:36.110 "data_size": 0 00:20:36.110 } 00:20:36.110 ] 00:20:36.110 }' 00:20:36.110 19:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.110 19:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.677 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:36.677 [2024-06-10 19:04:51.414992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:36.677 [2024-06-10 19:04:51.415023] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d76820 00:20:36.677 [2024-06-10 19:04:51.415031] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 
262144, blocklen 512 00:20:36.677 [2024-06-10 19:04:51.415258] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d7e9a0 00:20:36.677 [2024-06-10 19:04:51.415368] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d76820 00:20:36.677 [2024-06-10 19:04:51.415377] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d76820 00:20:36.677 [2024-06-10 19:04:51.415524] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.677 BaseBdev4 00:20:36.677 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:36.678 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:20:36.936 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:36.936 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:36.936 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:36.936 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:36.936 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.936 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:37.195 [ 00:20:37.195 { 00:20:37.195 "name": "BaseBdev4", 00:20:37.195 "aliases": [ 00:20:37.195 "1fc0eb66-15cb-42ff-a5bb-26f5af801926" 00:20:37.195 ], 00:20:37.195 "product_name": "Malloc disk", 00:20:37.195 "block_size": 512, 00:20:37.195 "num_blocks": 65536, 00:20:37.195 "uuid": "1fc0eb66-15cb-42ff-a5bb-26f5af801926", 00:20:37.195 
"assigned_rate_limits": { 00:20:37.195 "rw_ios_per_sec": 0, 00:20:37.195 "rw_mbytes_per_sec": 0, 00:20:37.195 "r_mbytes_per_sec": 0, 00:20:37.195 "w_mbytes_per_sec": 0 00:20:37.195 }, 00:20:37.195 "claimed": true, 00:20:37.195 "claim_type": "exclusive_write", 00:20:37.195 "zoned": false, 00:20:37.195 "supported_io_types": { 00:20:37.195 "read": true, 00:20:37.195 "write": true, 00:20:37.195 "unmap": true, 00:20:37.195 "write_zeroes": true, 00:20:37.195 "flush": true, 00:20:37.195 "reset": true, 00:20:37.195 "compare": false, 00:20:37.195 "compare_and_write": false, 00:20:37.195 "abort": true, 00:20:37.195 "nvme_admin": false, 00:20:37.195 "nvme_io": false 00:20:37.195 }, 00:20:37.195 "memory_domains": [ 00:20:37.195 { 00:20:37.195 "dma_device_id": "system", 00:20:37.195 "dma_device_type": 1 00:20:37.195 }, 00:20:37.195 { 00:20:37.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.195 "dma_device_type": 2 00:20:37.195 } 00:20:37.195 ], 00:20:37.195 "driver_specific": {} 00:20:37.195 } 00:20:37.195 ] 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.195 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.196 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.196 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.196 19:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.454 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.454 "name": "Existed_Raid", 00:20:37.454 "uuid": "8b89daf3-82a3-420e-a5e2-a5a7db657ad9", 00:20:37.454 "strip_size_kb": 64, 00:20:37.454 "state": "online", 00:20:37.454 "raid_level": "concat", 00:20:37.454 "superblock": false, 00:20:37.454 "num_base_bdevs": 4, 00:20:37.454 "num_base_bdevs_discovered": 4, 00:20:37.454 "num_base_bdevs_operational": 4, 00:20:37.454 "base_bdevs_list": [ 00:20:37.454 { 00:20:37.454 "name": "BaseBdev1", 00:20:37.454 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:37.454 "is_configured": true, 00:20:37.454 "data_offset": 0, 00:20:37.454 "data_size": 65536 00:20:37.454 }, 00:20:37.454 { 00:20:37.454 "name": "BaseBdev2", 00:20:37.454 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:37.454 "is_configured": true, 00:20:37.454 "data_offset": 0, 00:20:37.454 "data_size": 65536 00:20:37.454 }, 00:20:37.454 { 00:20:37.454 "name": "BaseBdev3", 00:20:37.454 "uuid": "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85", 00:20:37.454 "is_configured": true, 00:20:37.454 "data_offset": 0, 00:20:37.454 "data_size": 65536 00:20:37.454 
}, 00:20:37.454 { 00:20:37.454 "name": "BaseBdev4", 00:20:37.454 "uuid": "1fc0eb66-15cb-42ff-a5bb-26f5af801926", 00:20:37.454 "is_configured": true, 00:20:37.454 "data_offset": 0, 00:20:37.454 "data_size": 65536 00:20:37.454 } 00:20:37.454 ] 00:20:37.454 }' 00:20:37.454 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.454 19:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:38.022 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:38.282 [2024-06-10 19:04:52.851042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.282 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:38.282 "name": "Existed_Raid", 00:20:38.282 "aliases": [ 00:20:38.282 "8b89daf3-82a3-420e-a5e2-a5a7db657ad9" 00:20:38.282 ], 00:20:38.282 "product_name": "Raid Volume", 00:20:38.282 "block_size": 512, 00:20:38.282 "num_blocks": 262144, 00:20:38.282 "uuid": "8b89daf3-82a3-420e-a5e2-a5a7db657ad9", 00:20:38.282 
"assigned_rate_limits": { 00:20:38.282 "rw_ios_per_sec": 0, 00:20:38.282 "rw_mbytes_per_sec": 0, 00:20:38.282 "r_mbytes_per_sec": 0, 00:20:38.282 "w_mbytes_per_sec": 0 00:20:38.282 }, 00:20:38.282 "claimed": false, 00:20:38.282 "zoned": false, 00:20:38.282 "supported_io_types": { 00:20:38.282 "read": true, 00:20:38.282 "write": true, 00:20:38.282 "unmap": true, 00:20:38.282 "write_zeroes": true, 00:20:38.282 "flush": true, 00:20:38.282 "reset": true, 00:20:38.282 "compare": false, 00:20:38.282 "compare_and_write": false, 00:20:38.282 "abort": false, 00:20:38.282 "nvme_admin": false, 00:20:38.282 "nvme_io": false 00:20:38.282 }, 00:20:38.282 "memory_domains": [ 00:20:38.282 { 00:20:38.282 "dma_device_id": "system", 00:20:38.282 "dma_device_type": 1 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.282 "dma_device_type": 2 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "system", 00:20:38.282 "dma_device_type": 1 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.282 "dma_device_type": 2 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "system", 00:20:38.282 "dma_device_type": 1 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.282 "dma_device_type": 2 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "system", 00:20:38.282 "dma_device_type": 1 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.282 "dma_device_type": 2 00:20:38.282 } 00:20:38.282 ], 00:20:38.282 "driver_specific": { 00:20:38.282 "raid": { 00:20:38.282 "uuid": "8b89daf3-82a3-420e-a5e2-a5a7db657ad9", 00:20:38.282 "strip_size_kb": 64, 00:20:38.282 "state": "online", 00:20:38.282 "raid_level": "concat", 00:20:38.282 "superblock": false, 00:20:38.282 "num_base_bdevs": 4, 00:20:38.282 "num_base_bdevs_discovered": 4, 00:20:38.282 "num_base_bdevs_operational": 4, 00:20:38.282 "base_bdevs_list": [ 
00:20:38.282 { 00:20:38.282 "name": "BaseBdev1", 00:20:38.282 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:38.282 "is_configured": true, 00:20:38.282 "data_offset": 0, 00:20:38.282 "data_size": 65536 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "name": "BaseBdev2", 00:20:38.282 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:38.282 "is_configured": true, 00:20:38.282 "data_offset": 0, 00:20:38.282 "data_size": 65536 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "name": "BaseBdev3", 00:20:38.282 "uuid": "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85", 00:20:38.282 "is_configured": true, 00:20:38.282 "data_offset": 0, 00:20:38.282 "data_size": 65536 00:20:38.282 }, 00:20:38.282 { 00:20:38.282 "name": "BaseBdev4", 00:20:38.282 "uuid": "1fc0eb66-15cb-42ff-a5bb-26f5af801926", 00:20:38.282 "is_configured": true, 00:20:38.282 "data_offset": 0, 00:20:38.282 "data_size": 65536 00:20:38.282 } 00:20:38.282 ] 00:20:38.282 } 00:20:38.282 } 00:20:38.282 }' 00:20:38.282 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.282 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:38.282 BaseBdev2 00:20:38.282 BaseBdev3 00:20:38.282 BaseBdev4' 00:20:38.282 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:38.282 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:38.282 19:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:38.541 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:38.541 "name": "BaseBdev1", 00:20:38.541 "aliases": [ 00:20:38.541 "df693274-a493-48cd-bd20-9fd0797d6861" 00:20:38.541 ], 00:20:38.541 "product_name": 
"Malloc disk", 00:20:38.541 "block_size": 512, 00:20:38.541 "num_blocks": 65536, 00:20:38.541 "uuid": "df693274-a493-48cd-bd20-9fd0797d6861", 00:20:38.541 "assigned_rate_limits": { 00:20:38.541 "rw_ios_per_sec": 0, 00:20:38.541 "rw_mbytes_per_sec": 0, 00:20:38.541 "r_mbytes_per_sec": 0, 00:20:38.541 "w_mbytes_per_sec": 0 00:20:38.541 }, 00:20:38.541 "claimed": true, 00:20:38.541 "claim_type": "exclusive_write", 00:20:38.541 "zoned": false, 00:20:38.541 "supported_io_types": { 00:20:38.541 "read": true, 00:20:38.541 "write": true, 00:20:38.541 "unmap": true, 00:20:38.541 "write_zeroes": true, 00:20:38.541 "flush": true, 00:20:38.541 "reset": true, 00:20:38.541 "compare": false, 00:20:38.541 "compare_and_write": false, 00:20:38.541 "abort": true, 00:20:38.541 "nvme_admin": false, 00:20:38.541 "nvme_io": false 00:20:38.541 }, 00:20:38.541 "memory_domains": [ 00:20:38.541 { 00:20:38.541 "dma_device_id": "system", 00:20:38.541 "dma_device_type": 1 00:20:38.541 }, 00:20:38.541 { 00:20:38.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.541 "dma_device_type": 2 00:20:38.541 } 00:20:38.541 ], 00:20:38.541 "driver_specific": {} 00:20:38.541 }' 00:20:38.541 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.541 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.541 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:38.541 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.541 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.800 
19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:38.800 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:39.058 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.058 "name": "BaseBdev2", 00:20:39.058 "aliases": [ 00:20:39.058 "879bd952-ec8a-4550-863d-921674d3e1e0" 00:20:39.058 ], 00:20:39.058 "product_name": "Malloc disk", 00:20:39.058 "block_size": 512, 00:20:39.058 "num_blocks": 65536, 00:20:39.058 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:39.058 "assigned_rate_limits": { 00:20:39.058 "rw_ios_per_sec": 0, 00:20:39.058 "rw_mbytes_per_sec": 0, 00:20:39.059 "r_mbytes_per_sec": 0, 00:20:39.059 "w_mbytes_per_sec": 0 00:20:39.059 }, 00:20:39.059 "claimed": true, 00:20:39.059 "claim_type": "exclusive_write", 00:20:39.059 "zoned": false, 00:20:39.059 "supported_io_types": { 00:20:39.059 "read": true, 00:20:39.059 "write": true, 00:20:39.059 "unmap": true, 00:20:39.059 "write_zeroes": true, 00:20:39.059 "flush": true, 00:20:39.059 "reset": true, 00:20:39.059 "compare": false, 00:20:39.059 "compare_and_write": false, 00:20:39.059 "abort": true, 00:20:39.059 "nvme_admin": false, 00:20:39.059 "nvme_io": false 00:20:39.059 }, 00:20:39.059 "memory_domains": [ 00:20:39.059 { 00:20:39.059 "dma_device_id": 
"system", 00:20:39.059 "dma_device_type": 1 00:20:39.059 }, 00:20:39.059 { 00:20:39.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.059 "dma_device_type": 2 00:20:39.059 } 00:20:39.059 ], 00:20:39.059 "driver_specific": {} 00:20:39.059 }' 00:20:39.059 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.059 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.318 19:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.318 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.318 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.577 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.577 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.577 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:39.577 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:20:39.577 "name": "BaseBdev3", 00:20:39.577 "aliases": [ 00:20:39.577 "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85" 00:20:39.577 ], 00:20:39.577 "product_name": "Malloc disk", 00:20:39.577 "block_size": 512, 00:20:39.577 "num_blocks": 65536, 00:20:39.577 "uuid": "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85", 00:20:39.577 "assigned_rate_limits": { 00:20:39.577 "rw_ios_per_sec": 0, 00:20:39.577 "rw_mbytes_per_sec": 0, 00:20:39.577 "r_mbytes_per_sec": 0, 00:20:39.577 "w_mbytes_per_sec": 0 00:20:39.577 }, 00:20:39.577 "claimed": true, 00:20:39.577 "claim_type": "exclusive_write", 00:20:39.577 "zoned": false, 00:20:39.577 "supported_io_types": { 00:20:39.577 "read": true, 00:20:39.577 "write": true, 00:20:39.577 "unmap": true, 00:20:39.577 "write_zeroes": true, 00:20:39.577 "flush": true, 00:20:39.577 "reset": true, 00:20:39.577 "compare": false, 00:20:39.577 "compare_and_write": false, 00:20:39.577 "abort": true, 00:20:39.577 "nvme_admin": false, 00:20:39.577 "nvme_io": false 00:20:39.577 }, 00:20:39.577 "memory_domains": [ 00:20:39.577 { 00:20:39.577 "dma_device_id": "system", 00:20:39.577 "dma_device_type": 1 00:20:39.577 }, 00:20:39.577 { 00:20:39.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.577 "dma_device_type": 2 00:20:39.577 } 00:20:39.577 ], 00:20:39.577 "driver_specific": {} 00:20:39.577 }' 00:20:39.577 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.835 19:04:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.835 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.094 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.094 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.094 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.094 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:40.094 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.353 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.353 "name": "BaseBdev4", 00:20:40.353 "aliases": [ 00:20:40.353 "1fc0eb66-15cb-42ff-a5bb-26f5af801926" 00:20:40.353 ], 00:20:40.353 "product_name": "Malloc disk", 00:20:40.353 "block_size": 512, 00:20:40.353 "num_blocks": 65536, 00:20:40.353 "uuid": "1fc0eb66-15cb-42ff-a5bb-26f5af801926", 00:20:40.353 "assigned_rate_limits": { 00:20:40.353 "rw_ios_per_sec": 0, 00:20:40.353 "rw_mbytes_per_sec": 0, 00:20:40.353 "r_mbytes_per_sec": 0, 00:20:40.353 "w_mbytes_per_sec": 0 00:20:40.353 }, 00:20:40.353 "claimed": true, 00:20:40.353 "claim_type": "exclusive_write", 00:20:40.353 "zoned": false, 00:20:40.353 "supported_io_types": { 00:20:40.353 "read": true, 00:20:40.353 "write": true, 00:20:40.353 "unmap": true, 00:20:40.353 "write_zeroes": true, 00:20:40.353 "flush": true, 00:20:40.353 "reset": true, 00:20:40.353 "compare": false, 00:20:40.353 "compare_and_write": 
false, 00:20:40.353 "abort": true, 00:20:40.353 "nvme_admin": false, 00:20:40.353 "nvme_io": false 00:20:40.353 }, 00:20:40.353 "memory_domains": [ 00:20:40.353 { 00:20:40.354 "dma_device_id": "system", 00:20:40.354 "dma_device_type": 1 00:20:40.354 }, 00:20:40.354 { 00:20:40.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.354 "dma_device_type": 2 00:20:40.354 } 00:20:40.354 ], 00:20:40.354 "driver_specific": {} 00:20:40.354 }' 00:20:40.354 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.354 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.354 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.354 19:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.354 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.354 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.354 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.354 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.613 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.613 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.613 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.613 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.613 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:40.871 [2024-06-10 19:04:55.441686] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:40.871 
[2024-06-10 19:04:55.441709] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:40.871 [2024-06-10 19:04:55.441763] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.872 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.130 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.130 "name": "Existed_Raid", 00:20:41.130 "uuid": "8b89daf3-82a3-420e-a5e2-a5a7db657ad9", 00:20:41.130 "strip_size_kb": 64, 00:20:41.130 "state": "offline", 00:20:41.130 "raid_level": "concat", 00:20:41.130 "superblock": false, 00:20:41.130 "num_base_bdevs": 4, 00:20:41.130 "num_base_bdevs_discovered": 3, 00:20:41.130 "num_base_bdevs_operational": 3, 00:20:41.130 "base_bdevs_list": [ 00:20:41.130 { 00:20:41.130 "name": null, 00:20:41.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.130 "is_configured": false, 00:20:41.130 "data_offset": 0, 00:20:41.130 "data_size": 65536 00:20:41.130 }, 00:20:41.130 { 00:20:41.130 "name": "BaseBdev2", 00:20:41.130 "uuid": "879bd952-ec8a-4550-863d-921674d3e1e0", 00:20:41.130 "is_configured": true, 00:20:41.130 "data_offset": 0, 00:20:41.130 "data_size": 65536 00:20:41.130 }, 00:20:41.130 { 00:20:41.130 "name": "BaseBdev3", 00:20:41.130 "uuid": "bf8077dd-8cc6-4dbf-8c8a-2010a80fcc85", 00:20:41.130 "is_configured": true, 00:20:41.130 "data_offset": 0, 00:20:41.130 "data_size": 65536 00:20:41.130 }, 00:20:41.130 { 00:20:41.130 "name": "BaseBdev4", 00:20:41.130 "uuid": "1fc0eb66-15cb-42ff-a5bb-26f5af801926", 00:20:41.130 "is_configured": true, 00:20:41.130 "data_offset": 0, 00:20:41.130 "data_size": 65536 00:20:41.130 } 00:20:41.130 ] 00:20:41.130 }' 00:20:41.130 19:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.130 19:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.698 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:41.698 19:04:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:41.698 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.698 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:41.957 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:41.957 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.957 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:41.957 [2024-06-10 19:04:56.693971] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.216 19:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:42.475 [2024-06-10 19:04:57.153331] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev3 00:20:42.475 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.475 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.475 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.475 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.734 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.734 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.734 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:42.993 [2024-06-10 19:04:57.616684] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:42.993 [2024-06-10 19:04:57.616720] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d76820 name Existed_Raid, state offline 00:20:42.993 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.993 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.993 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.993 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:43.252 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:43.252 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:43.252 19:04:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:43.252 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:43.252 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:43.252 19:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.511 BaseBdev2 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:43.511 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.770 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:44.029 [ 00:20:44.029 { 00:20:44.029 "name": "BaseBdev2", 00:20:44.029 "aliases": [ 00:20:44.029 "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8" 00:20:44.029 ], 00:20:44.029 "product_name": "Malloc disk", 00:20:44.029 "block_size": 512, 00:20:44.029 "num_blocks": 65536, 00:20:44.029 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:44.029 "assigned_rate_limits": { 
00:20:44.029 "rw_ios_per_sec": 0, 00:20:44.029 "rw_mbytes_per_sec": 0, 00:20:44.029 "r_mbytes_per_sec": 0, 00:20:44.029 "w_mbytes_per_sec": 0 00:20:44.029 }, 00:20:44.029 "claimed": false, 00:20:44.029 "zoned": false, 00:20:44.029 "supported_io_types": { 00:20:44.029 "read": true, 00:20:44.029 "write": true, 00:20:44.029 "unmap": true, 00:20:44.029 "write_zeroes": true, 00:20:44.029 "flush": true, 00:20:44.029 "reset": true, 00:20:44.029 "compare": false, 00:20:44.029 "compare_and_write": false, 00:20:44.029 "abort": true, 00:20:44.029 "nvme_admin": false, 00:20:44.029 "nvme_io": false 00:20:44.029 }, 00:20:44.029 "memory_domains": [ 00:20:44.029 { 00:20:44.029 "dma_device_id": "system", 00:20:44.029 "dma_device_type": 1 00:20:44.029 }, 00:20:44.029 { 00:20:44.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.029 "dma_device_type": 2 00:20:44.029 } 00:20:44.029 ], 00:20:44.029 "driver_specific": {} 00:20:44.029 } 00:20:44.030 ] 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:44.030 BaseBdev3 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:44.030 19:04:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:44.030 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.289 19:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:44.548 [ 00:20:44.548 { 00:20:44.548 "name": "BaseBdev3", 00:20:44.548 "aliases": [ 00:20:44.548 "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233" 00:20:44.548 ], 00:20:44.548 "product_name": "Malloc disk", 00:20:44.548 "block_size": 512, 00:20:44.548 "num_blocks": 65536, 00:20:44.548 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:44.548 "assigned_rate_limits": { 00:20:44.548 "rw_ios_per_sec": 0, 00:20:44.548 "rw_mbytes_per_sec": 0, 00:20:44.548 "r_mbytes_per_sec": 0, 00:20:44.548 "w_mbytes_per_sec": 0 00:20:44.548 }, 00:20:44.548 "claimed": false, 00:20:44.548 "zoned": false, 00:20:44.548 "supported_io_types": { 00:20:44.548 "read": true, 00:20:44.548 "write": true, 00:20:44.548 "unmap": true, 00:20:44.548 "write_zeroes": true, 00:20:44.548 "flush": true, 00:20:44.548 "reset": true, 00:20:44.548 "compare": false, 00:20:44.548 "compare_and_write": false, 00:20:44.548 "abort": true, 00:20:44.548 "nvme_admin": false, 00:20:44.548 "nvme_io": false 00:20:44.548 }, 00:20:44.548 "memory_domains": [ 00:20:44.548 { 00:20:44.548 "dma_device_id": "system", 00:20:44.548 "dma_device_type": 1 00:20:44.548 }, 00:20:44.548 { 00:20:44.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.548 "dma_device_type": 2 00:20:44.548 } 00:20:44.548 ], 00:20:44.548 "driver_specific": {} 00:20:44.548 } 00:20:44.548 ] 00:20:44.548 19:04:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@906 -- # return 0 00:20:44.548 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.548 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.548 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:44.807 BaseBdev4 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:44.807 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.066 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:45.325 [ 00:20:45.325 { 00:20:45.325 "name": "BaseBdev4", 00:20:45.325 "aliases": [ 00:20:45.325 "14406cb3-a875-4a8b-9670-33c602674106" 00:20:45.325 ], 00:20:45.325 "product_name": "Malloc disk", 00:20:45.325 "block_size": 512, 00:20:45.325 "num_blocks": 65536, 00:20:45.325 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:45.325 "assigned_rate_limits": { 00:20:45.325 "rw_ios_per_sec": 0, 
00:20:45.325 "rw_mbytes_per_sec": 0, 00:20:45.325 "r_mbytes_per_sec": 0, 00:20:45.325 "w_mbytes_per_sec": 0 00:20:45.325 }, 00:20:45.325 "claimed": false, 00:20:45.325 "zoned": false, 00:20:45.325 "supported_io_types": { 00:20:45.325 "read": true, 00:20:45.325 "write": true, 00:20:45.325 "unmap": true, 00:20:45.325 "write_zeroes": true, 00:20:45.325 "flush": true, 00:20:45.325 "reset": true, 00:20:45.325 "compare": false, 00:20:45.325 "compare_and_write": false, 00:20:45.325 "abort": true, 00:20:45.325 "nvme_admin": false, 00:20:45.325 "nvme_io": false 00:20:45.325 }, 00:20:45.325 "memory_domains": [ 00:20:45.325 { 00:20:45.325 "dma_device_id": "system", 00:20:45.325 "dma_device_type": 1 00:20:45.325 }, 00:20:45.325 { 00:20:45.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.325 "dma_device_type": 2 00:20:45.325 } 00:20:45.325 ], 00:20:45.325 "driver_specific": {} 00:20:45.325 } 00:20:45.325 ] 00:20:45.325 19:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:45.325 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:45.325 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:45.325 19:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:45.584 [2024-06-10 19:05:00.114033] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.584 [2024-06-10 19:05:00.114071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.584 [2024-06-10 19:05:00.114089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.584 [2024-06-10 19:05:00.115315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:20:45.584 [2024-06-10 19:05:00.115353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.584 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.843 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.843 "name": "Existed_Raid", 00:20:45.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.843 "strip_size_kb": 64, 00:20:45.843 "state": "configuring", 00:20:45.843 "raid_level": "concat", 00:20:45.843 "superblock": false, 00:20:45.843 
"num_base_bdevs": 4, 00:20:45.843 "num_base_bdevs_discovered": 3, 00:20:45.843 "num_base_bdevs_operational": 4, 00:20:45.843 "base_bdevs_list": [ 00:20:45.843 { 00:20:45.843 "name": "BaseBdev1", 00:20:45.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.843 "is_configured": false, 00:20:45.843 "data_offset": 0, 00:20:45.843 "data_size": 0 00:20:45.843 }, 00:20:45.843 { 00:20:45.843 "name": "BaseBdev2", 00:20:45.843 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:45.843 "is_configured": true, 00:20:45.843 "data_offset": 0, 00:20:45.843 "data_size": 65536 00:20:45.843 }, 00:20:45.843 { 00:20:45.843 "name": "BaseBdev3", 00:20:45.843 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:45.843 "is_configured": true, 00:20:45.843 "data_offset": 0, 00:20:45.844 "data_size": 65536 00:20:45.844 }, 00:20:45.844 { 00:20:45.844 "name": "BaseBdev4", 00:20:45.844 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:45.844 "is_configured": true, 00:20:45.844 "data_offset": 0, 00:20:45.844 "data_size": 65536 00:20:45.844 } 00:20:45.844 ] 00:20:45.844 }' 00:20:45.844 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.844 19:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 19:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:46.412 [2024-06-10 19:05:01.156736] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.671 
19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.671 "name": "Existed_Raid", 00:20:46.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.671 "strip_size_kb": 64, 00:20:46.671 "state": "configuring", 00:20:46.671 "raid_level": "concat", 00:20:46.671 "superblock": false, 00:20:46.671 "num_base_bdevs": 4, 00:20:46.671 "num_base_bdevs_discovered": 2, 00:20:46.671 "num_base_bdevs_operational": 4, 00:20:46.671 "base_bdevs_list": [ 00:20:46.671 { 00:20:46.671 "name": "BaseBdev1", 00:20:46.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.671 "is_configured": false, 00:20:46.671 "data_offset": 0, 00:20:46.671 "data_size": 0 00:20:46.671 }, 00:20:46.671 { 00:20:46.671 "name": null, 00:20:46.671 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:46.671 "is_configured": false, 00:20:46.671 "data_offset": 0, 
00:20:46.671 "data_size": 65536 00:20:46.671 }, 00:20:46.671 { 00:20:46.671 "name": "BaseBdev3", 00:20:46.671 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:46.671 "is_configured": true, 00:20:46.671 "data_offset": 0, 00:20:46.671 "data_size": 65536 00:20:46.671 }, 00:20:46.671 { 00:20:46.671 "name": "BaseBdev4", 00:20:46.671 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:46.671 "is_configured": true, 00:20:46.671 "data_offset": 0, 00:20:46.671 "data_size": 65536 00:20:46.671 } 00:20:46.671 ] 00:20:46.671 }' 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.671 19:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.240 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.240 19:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:47.499 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:47.499 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:47.758 [2024-06-10 19:05:02.375114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.758 BaseBdev1 00:20:47.758 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:47.758 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:20:47.758 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:47.758 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:47.758 
19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:47.758 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:47.758 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.016 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.276 [ 00:20:48.276 { 00:20:48.276 "name": "BaseBdev1", 00:20:48.276 "aliases": [ 00:20:48.276 "5eba9c7c-8899-4445-9905-5e677e77dd48" 00:20:48.276 ], 00:20:48.276 "product_name": "Malloc disk", 00:20:48.276 "block_size": 512, 00:20:48.276 "num_blocks": 65536, 00:20:48.276 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:48.276 "assigned_rate_limits": { 00:20:48.276 "rw_ios_per_sec": 0, 00:20:48.276 "rw_mbytes_per_sec": 0, 00:20:48.276 "r_mbytes_per_sec": 0, 00:20:48.276 "w_mbytes_per_sec": 0 00:20:48.276 }, 00:20:48.276 "claimed": true, 00:20:48.276 "claim_type": "exclusive_write", 00:20:48.276 "zoned": false, 00:20:48.276 "supported_io_types": { 00:20:48.276 "read": true, 00:20:48.276 "write": true, 00:20:48.276 "unmap": true, 00:20:48.276 "write_zeroes": true, 00:20:48.276 "flush": true, 00:20:48.276 "reset": true, 00:20:48.276 "compare": false, 00:20:48.276 "compare_and_write": false, 00:20:48.276 "abort": true, 00:20:48.276 "nvme_admin": false, 00:20:48.276 "nvme_io": false 00:20:48.276 }, 00:20:48.276 "memory_domains": [ 00:20:48.276 { 00:20:48.276 "dma_device_id": "system", 00:20:48.276 "dma_device_type": 1 00:20:48.276 }, 00:20:48.276 { 00:20:48.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.276 "dma_device_type": 2 00:20:48.276 } 00:20:48.276 ], 00:20:48.276 "driver_specific": {} 00:20:48.276 } 00:20:48.276 ] 
00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.276 19:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.535 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.535 "name": "Existed_Raid", 00:20:48.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.535 "strip_size_kb": 64, 00:20:48.535 "state": "configuring", 00:20:48.535 "raid_level": "concat", 00:20:48.535 "superblock": false, 00:20:48.535 "num_base_bdevs": 4, 00:20:48.535 
"num_base_bdevs_discovered": 3, 00:20:48.535 "num_base_bdevs_operational": 4, 00:20:48.535 "base_bdevs_list": [ 00:20:48.535 { 00:20:48.535 "name": "BaseBdev1", 00:20:48.535 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:48.535 "is_configured": true, 00:20:48.535 "data_offset": 0, 00:20:48.535 "data_size": 65536 00:20:48.535 }, 00:20:48.535 { 00:20:48.535 "name": null, 00:20:48.535 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:48.535 "is_configured": false, 00:20:48.535 "data_offset": 0, 00:20:48.535 "data_size": 65536 00:20:48.535 }, 00:20:48.535 { 00:20:48.535 "name": "BaseBdev3", 00:20:48.535 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:48.535 "is_configured": true, 00:20:48.535 "data_offset": 0, 00:20:48.535 "data_size": 65536 00:20:48.535 }, 00:20:48.535 { 00:20:48.535 "name": "BaseBdev4", 00:20:48.535 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:48.535 "is_configured": true, 00:20:48.535 "data_offset": 0, 00:20:48.535 "data_size": 65536 00:20:48.535 } 00:20:48.535 ] 00:20:48.535 }' 00:20:48.535 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.535 19:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.103 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.103 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.103 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:49.103 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:49.363 [2024-06-10 19:05:03.967359] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.363 19:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.622 19:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.622 "name": "Existed_Raid", 00:20:49.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.622 "strip_size_kb": 64, 00:20:49.622 "state": "configuring", 00:20:49.622 "raid_level": "concat", 00:20:49.622 "superblock": false, 00:20:49.622 "num_base_bdevs": 4, 00:20:49.622 "num_base_bdevs_discovered": 2, 00:20:49.622 "num_base_bdevs_operational": 4, 00:20:49.622 
"base_bdevs_list": [ 00:20:49.622 { 00:20:49.622 "name": "BaseBdev1", 00:20:49.622 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:49.622 "is_configured": true, 00:20:49.622 "data_offset": 0, 00:20:49.622 "data_size": 65536 00:20:49.622 }, 00:20:49.622 { 00:20:49.622 "name": null, 00:20:49.622 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:49.622 "is_configured": false, 00:20:49.622 "data_offset": 0, 00:20:49.622 "data_size": 65536 00:20:49.622 }, 00:20:49.622 { 00:20:49.622 "name": null, 00:20:49.622 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:49.622 "is_configured": false, 00:20:49.622 "data_offset": 0, 00:20:49.622 "data_size": 65536 00:20:49.622 }, 00:20:49.622 { 00:20:49.622 "name": "BaseBdev4", 00:20:49.622 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:49.622 "is_configured": true, 00:20:49.622 "data_offset": 0, 00:20:49.622 "data_size": 65536 00:20:49.622 } 00:20:49.622 ] 00:20:49.622 }' 00:20:49.622 19:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.622 19:05:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.190 19:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.190 19:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:50.449 19:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:50.449 19:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:50.449 [2024-06-10 19:05:05.190606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:50.707 19:05:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:50.707 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.707 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.708 "name": "Existed_Raid", 00:20:50.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.708 "strip_size_kb": 64, 00:20:50.708 "state": "configuring", 00:20:50.708 "raid_level": "concat", 00:20:50.708 "superblock": false, 00:20:50.708 "num_base_bdevs": 4, 00:20:50.708 "num_base_bdevs_discovered": 3, 00:20:50.708 "num_base_bdevs_operational": 4, 00:20:50.708 "base_bdevs_list": [ 00:20:50.708 { 00:20:50.708 "name": "BaseBdev1", 00:20:50.708 
"uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:50.708 "is_configured": true, 00:20:50.708 "data_offset": 0, 00:20:50.708 "data_size": 65536 00:20:50.708 }, 00:20:50.708 { 00:20:50.708 "name": null, 00:20:50.708 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:50.708 "is_configured": false, 00:20:50.708 "data_offset": 0, 00:20:50.708 "data_size": 65536 00:20:50.708 }, 00:20:50.708 { 00:20:50.708 "name": "BaseBdev3", 00:20:50.708 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:50.708 "is_configured": true, 00:20:50.708 "data_offset": 0, 00:20:50.708 "data_size": 65536 00:20:50.708 }, 00:20:50.708 { 00:20:50.708 "name": "BaseBdev4", 00:20:50.708 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:50.708 "is_configured": true, 00:20:50.708 "data_offset": 0, 00:20:50.708 "data_size": 65536 00:20:50.708 } 00:20:50.708 ] 00:20:50.708 }' 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.708 19:05:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.276 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.276 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:51.534 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:51.534 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:51.793 [2024-06-10 19:05:06.442022] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:51.793 19:05:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.793 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.052 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.052 "name": "Existed_Raid", 00:20:52.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.052 "strip_size_kb": 64, 00:20:52.052 "state": "configuring", 00:20:52.052 "raid_level": "concat", 00:20:52.052 "superblock": false, 00:20:52.052 "num_base_bdevs": 4, 00:20:52.052 "num_base_bdevs_discovered": 2, 00:20:52.052 "num_base_bdevs_operational": 4, 00:20:52.052 "base_bdevs_list": [ 00:20:52.052 { 00:20:52.052 "name": null, 00:20:52.052 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:52.052 "is_configured": false, 00:20:52.052 "data_offset": 0, 
00:20:52.052 "data_size": 65536 00:20:52.052 }, 00:20:52.052 { 00:20:52.052 "name": null, 00:20:52.052 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:52.052 "is_configured": false, 00:20:52.052 "data_offset": 0, 00:20:52.052 "data_size": 65536 00:20:52.052 }, 00:20:52.052 { 00:20:52.052 "name": "BaseBdev3", 00:20:52.052 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:52.052 "is_configured": true, 00:20:52.052 "data_offset": 0, 00:20:52.052 "data_size": 65536 00:20:52.052 }, 00:20:52.052 { 00:20:52.052 "name": "BaseBdev4", 00:20:52.052 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:52.052 "is_configured": true, 00:20:52.052 "data_offset": 0, 00:20:52.052 "data_size": 65536 00:20:52.052 } 00:20:52.052 ] 00:20:52.052 }' 00:20:52.052 19:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.052 19:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.620 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.620 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:52.879 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:52.879 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:53.138 [2024-06-10 19:05:07.727395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.138 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.397 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.397 "name": "Existed_Raid", 00:20:53.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.397 "strip_size_kb": 64, 00:20:53.397 "state": "configuring", 00:20:53.397 "raid_level": "concat", 00:20:53.397 "superblock": false, 00:20:53.397 "num_base_bdevs": 4, 00:20:53.397 "num_base_bdevs_discovered": 3, 00:20:53.397 "num_base_bdevs_operational": 4, 00:20:53.397 "base_bdevs_list": [ 00:20:53.398 { 00:20:53.398 "name": null, 00:20:53.398 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:53.398 "is_configured": false, 00:20:53.398 "data_offset": 0, 00:20:53.398 "data_size": 65536 00:20:53.398 }, 00:20:53.398 { 
00:20:53.398 "name": "BaseBdev2", 00:20:53.398 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:53.398 "is_configured": true, 00:20:53.398 "data_offset": 0, 00:20:53.398 "data_size": 65536 00:20:53.398 }, 00:20:53.398 { 00:20:53.398 "name": "BaseBdev3", 00:20:53.398 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:53.398 "is_configured": true, 00:20:53.398 "data_offset": 0, 00:20:53.398 "data_size": 65536 00:20:53.398 }, 00:20:53.398 { 00:20:53.398 "name": "BaseBdev4", 00:20:53.398 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:53.398 "is_configured": true, 00:20:53.398 "data_offset": 0, 00:20:53.398 "data_size": 65536 00:20:53.398 } 00:20:53.398 ] 00:20:53.398 }' 00:20:53.398 19:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.398 19:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.965 19:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.965 19:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:54.224 19:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:54.224 19:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.224 19:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:54.483 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5eba9c7c-8899-4445-9905-5e677e77dd48 00:20:54.483 [2024-06-10 19:05:09.226676] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:54.483 [2024-06-10 19:05:09.226708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d76500 00:20:54.483 [2024-06-10 19:05:09.226717] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:54.483 [2024-06-10 19:05:09.226894] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c8af80 00:20:54.483 [2024-06-10 19:05:09.227000] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d76500 00:20:54.483 [2024-06-10 19:05:09.227009] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d76500 00:20:54.483 [2024-06-10 19:05:09.227153] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.483 NewBaseBdev 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.742 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:55.001 [ 00:20:55.001 
{ 00:20:55.001 "name": "NewBaseBdev", 00:20:55.001 "aliases": [ 00:20:55.001 "5eba9c7c-8899-4445-9905-5e677e77dd48" 00:20:55.001 ], 00:20:55.001 "product_name": "Malloc disk", 00:20:55.001 "block_size": 512, 00:20:55.002 "num_blocks": 65536, 00:20:55.002 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:55.002 "assigned_rate_limits": { 00:20:55.002 "rw_ios_per_sec": 0, 00:20:55.002 "rw_mbytes_per_sec": 0, 00:20:55.002 "r_mbytes_per_sec": 0, 00:20:55.002 "w_mbytes_per_sec": 0 00:20:55.002 }, 00:20:55.002 "claimed": true, 00:20:55.002 "claim_type": "exclusive_write", 00:20:55.002 "zoned": false, 00:20:55.002 "supported_io_types": { 00:20:55.002 "read": true, 00:20:55.002 "write": true, 00:20:55.002 "unmap": true, 00:20:55.002 "write_zeroes": true, 00:20:55.002 "flush": true, 00:20:55.002 "reset": true, 00:20:55.002 "compare": false, 00:20:55.002 "compare_and_write": false, 00:20:55.002 "abort": true, 00:20:55.002 "nvme_admin": false, 00:20:55.002 "nvme_io": false 00:20:55.002 }, 00:20:55.002 "memory_domains": [ 00:20:55.002 { 00:20:55.002 "dma_device_id": "system", 00:20:55.002 "dma_device_type": 1 00:20:55.002 }, 00:20:55.002 { 00:20:55.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.002 "dma_device_type": 2 00:20:55.002 } 00:20:55.002 ], 00:20:55.002 "driver_specific": {} 00:20:55.002 } 00:20:55.002 ] 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.002 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.262 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.262 "name": "Existed_Raid", 00:20:55.262 "uuid": "a4ec48a5-a0af-484c-bb0c-3f795c4af0a4", 00:20:55.262 "strip_size_kb": 64, 00:20:55.262 "state": "online", 00:20:55.262 "raid_level": "concat", 00:20:55.262 "superblock": false, 00:20:55.262 "num_base_bdevs": 4, 00:20:55.262 "num_base_bdevs_discovered": 4, 00:20:55.262 "num_base_bdevs_operational": 4, 00:20:55.262 "base_bdevs_list": [ 00:20:55.262 { 00:20:55.262 "name": "NewBaseBdev", 00:20:55.262 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:55.262 "is_configured": true, 00:20:55.262 "data_offset": 0, 00:20:55.262 "data_size": 65536 00:20:55.262 }, 00:20:55.262 { 00:20:55.262 "name": "BaseBdev2", 00:20:55.262 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:55.262 "is_configured": true, 00:20:55.262 "data_offset": 0, 00:20:55.262 "data_size": 65536 00:20:55.262 }, 00:20:55.262 { 00:20:55.262 "name": "BaseBdev3", 00:20:55.262 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 
00:20:55.262 "is_configured": true, 00:20:55.262 "data_offset": 0, 00:20:55.262 "data_size": 65536 00:20:55.262 }, 00:20:55.262 { 00:20:55.262 "name": "BaseBdev4", 00:20:55.262 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:55.262 "is_configured": true, 00:20:55.262 "data_offset": 0, 00:20:55.262 "data_size": 65536 00:20:55.262 } 00:20:55.262 ] 00:20:55.262 }' 00:20:55.262 19:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.262 19:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:55.831 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:56.090 [2024-06-10 19:05:10.654716] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.090 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:56.090 "name": "Existed_Raid", 00:20:56.090 "aliases": [ 00:20:56.090 "a4ec48a5-a0af-484c-bb0c-3f795c4af0a4" 00:20:56.090 ], 00:20:56.090 "product_name": "Raid Volume", 00:20:56.090 "block_size": 512, 00:20:56.090 
"num_blocks": 262144, 00:20:56.090 "uuid": "a4ec48a5-a0af-484c-bb0c-3f795c4af0a4", 00:20:56.090 "assigned_rate_limits": { 00:20:56.090 "rw_ios_per_sec": 0, 00:20:56.090 "rw_mbytes_per_sec": 0, 00:20:56.090 "r_mbytes_per_sec": 0, 00:20:56.090 "w_mbytes_per_sec": 0 00:20:56.090 }, 00:20:56.090 "claimed": false, 00:20:56.090 "zoned": false, 00:20:56.090 "supported_io_types": { 00:20:56.090 "read": true, 00:20:56.090 "write": true, 00:20:56.090 "unmap": true, 00:20:56.090 "write_zeroes": true, 00:20:56.090 "flush": true, 00:20:56.090 "reset": true, 00:20:56.090 "compare": false, 00:20:56.090 "compare_and_write": false, 00:20:56.090 "abort": false, 00:20:56.090 "nvme_admin": false, 00:20:56.090 "nvme_io": false 00:20:56.090 }, 00:20:56.090 "memory_domains": [ 00:20:56.090 { 00:20:56.090 "dma_device_id": "system", 00:20:56.090 "dma_device_type": 1 00:20:56.090 }, 00:20:56.090 { 00:20:56.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.090 "dma_device_type": 2 00:20:56.090 }, 00:20:56.090 { 00:20:56.091 "dma_device_id": "system", 00:20:56.091 "dma_device_type": 1 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.091 "dma_device_type": 2 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "dma_device_id": "system", 00:20:56.091 "dma_device_type": 1 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.091 "dma_device_type": 2 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "dma_device_id": "system", 00:20:56.091 "dma_device_type": 1 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.091 "dma_device_type": 2 00:20:56.091 } 00:20:56.091 ], 00:20:56.091 "driver_specific": { 00:20:56.091 "raid": { 00:20:56.091 "uuid": "a4ec48a5-a0af-484c-bb0c-3f795c4af0a4", 00:20:56.091 "strip_size_kb": 64, 00:20:56.091 "state": "online", 00:20:56.091 "raid_level": "concat", 00:20:56.091 "superblock": false, 00:20:56.091 "num_base_bdevs": 4, 00:20:56.091 
"num_base_bdevs_discovered": 4, 00:20:56.091 "num_base_bdevs_operational": 4, 00:20:56.091 "base_bdevs_list": [ 00:20:56.091 { 00:20:56.091 "name": "NewBaseBdev", 00:20:56.091 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:56.091 "is_configured": true, 00:20:56.091 "data_offset": 0, 00:20:56.091 "data_size": 65536 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "name": "BaseBdev2", 00:20:56.091 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:56.091 "is_configured": true, 00:20:56.091 "data_offset": 0, 00:20:56.091 "data_size": 65536 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "name": "BaseBdev3", 00:20:56.091 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:56.091 "is_configured": true, 00:20:56.091 "data_offset": 0, 00:20:56.091 "data_size": 65536 00:20:56.091 }, 00:20:56.091 { 00:20:56.091 "name": "BaseBdev4", 00:20:56.091 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:56.091 "is_configured": true, 00:20:56.091 "data_offset": 0, 00:20:56.091 "data_size": 65536 00:20:56.091 } 00:20:56.091 ] 00:20:56.091 } 00:20:56.091 } 00:20:56.091 }' 00:20:56.091 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.091 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:56.091 BaseBdev2 00:20:56.091 BaseBdev3 00:20:56.091 BaseBdev4' 00:20:56.091 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.091 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.091 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:56.350 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.350 "name": "NewBaseBdev", 
00:20:56.350 "aliases": [ 00:20:56.350 "5eba9c7c-8899-4445-9905-5e677e77dd48" 00:20:56.350 ], 00:20:56.350 "product_name": "Malloc disk", 00:20:56.350 "block_size": 512, 00:20:56.350 "num_blocks": 65536, 00:20:56.350 "uuid": "5eba9c7c-8899-4445-9905-5e677e77dd48", 00:20:56.350 "assigned_rate_limits": { 00:20:56.350 "rw_ios_per_sec": 0, 00:20:56.350 "rw_mbytes_per_sec": 0, 00:20:56.350 "r_mbytes_per_sec": 0, 00:20:56.350 "w_mbytes_per_sec": 0 00:20:56.350 }, 00:20:56.350 "claimed": true, 00:20:56.350 "claim_type": "exclusive_write", 00:20:56.350 "zoned": false, 00:20:56.350 "supported_io_types": { 00:20:56.350 "read": true, 00:20:56.350 "write": true, 00:20:56.350 "unmap": true, 00:20:56.350 "write_zeroes": true, 00:20:56.350 "flush": true, 00:20:56.350 "reset": true, 00:20:56.350 "compare": false, 00:20:56.350 "compare_and_write": false, 00:20:56.350 "abort": true, 00:20:56.350 "nvme_admin": false, 00:20:56.350 "nvme_io": false 00:20:56.350 }, 00:20:56.350 "memory_domains": [ 00:20:56.350 { 00:20:56.350 "dma_device_id": "system", 00:20:56.350 "dma_device_type": 1 00:20:56.350 }, 00:20:56.350 { 00:20:56.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.350 "dma_device_type": 2 00:20:56.350 } 00:20:56.350 ], 00:20:56.350 "driver_specific": {} 00:20:56.350 }' 00:20:56.350 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.350 19:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.350 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.350 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.350 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:56.609 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.868 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.868 "name": "BaseBdev2", 00:20:56.868 "aliases": [ 00:20:56.868 "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8" 00:20:56.868 ], 00:20:56.868 "product_name": "Malloc disk", 00:20:56.868 "block_size": 512, 00:20:56.868 "num_blocks": 65536, 00:20:56.868 "uuid": "b91e1b58-f863-4ab7-b52c-bf9e6ec14df8", 00:20:56.868 "assigned_rate_limits": { 00:20:56.868 "rw_ios_per_sec": 0, 00:20:56.868 "rw_mbytes_per_sec": 0, 00:20:56.868 "r_mbytes_per_sec": 0, 00:20:56.868 "w_mbytes_per_sec": 0 00:20:56.868 }, 00:20:56.868 "claimed": true, 00:20:56.868 "claim_type": "exclusive_write", 00:20:56.868 "zoned": false, 00:20:56.868 "supported_io_types": { 00:20:56.868 "read": true, 00:20:56.868 "write": true, 00:20:56.868 "unmap": true, 00:20:56.868 "write_zeroes": true, 00:20:56.868 "flush": true, 00:20:56.868 "reset": true, 00:20:56.868 "compare": false, 00:20:56.868 "compare_and_write": false, 00:20:56.868 "abort": true, 00:20:56.868 "nvme_admin": false, 00:20:56.868 
"nvme_io": false 00:20:56.868 }, 00:20:56.868 "memory_domains": [ 00:20:56.868 { 00:20:56.868 "dma_device_id": "system", 00:20:56.868 "dma_device_type": 1 00:20:56.868 }, 00:20:56.868 { 00:20:56.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.868 "dma_device_type": 2 00:20:56.868 } 00:20:56.868 ], 00:20:56.868 "driver_specific": {} 00:20:56.868 }' 00:20:56.868 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.868 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.868 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.868 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:57.128 19:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
jq '.[]' 00:20:57.387 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.387 "name": "BaseBdev3", 00:20:57.387 "aliases": [ 00:20:57.387 "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233" 00:20:57.387 ], 00:20:57.387 "product_name": "Malloc disk", 00:20:57.387 "block_size": 512, 00:20:57.387 "num_blocks": 65536, 00:20:57.387 "uuid": "ef2bf87f-54b7-4ba0-b7e4-c41f7b053233", 00:20:57.387 "assigned_rate_limits": { 00:20:57.387 "rw_ios_per_sec": 0, 00:20:57.387 "rw_mbytes_per_sec": 0, 00:20:57.387 "r_mbytes_per_sec": 0, 00:20:57.387 "w_mbytes_per_sec": 0 00:20:57.387 }, 00:20:57.387 "claimed": true, 00:20:57.387 "claim_type": "exclusive_write", 00:20:57.387 "zoned": false, 00:20:57.387 "supported_io_types": { 00:20:57.387 "read": true, 00:20:57.387 "write": true, 00:20:57.387 "unmap": true, 00:20:57.387 "write_zeroes": true, 00:20:57.387 "flush": true, 00:20:57.388 "reset": true, 00:20:57.388 "compare": false, 00:20:57.388 "compare_and_write": false, 00:20:57.388 "abort": true, 00:20:57.388 "nvme_admin": false, 00:20:57.388 "nvme_io": false 00:20:57.388 }, 00:20:57.388 "memory_domains": [ 00:20:57.388 { 00:20:57.388 "dma_device_id": "system", 00:20:57.388 "dma_device_type": 1 00:20:57.388 }, 00:20:57.388 { 00:20:57.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.388 "dma_device_type": 2 00:20:57.388 } 00:20:57.388 ], 00:20:57.388 "driver_specific": {} 00:20:57.388 }' 00:20:57.388 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.388 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.645 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.645 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.645 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.645 19:05:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.645 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.645 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.646 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.646 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.646 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.904 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.904 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.904 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:57.904 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.904 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.904 "name": "BaseBdev4", 00:20:57.904 "aliases": [ 00:20:57.904 "14406cb3-a875-4a8b-9670-33c602674106" 00:20:57.904 ], 00:20:57.904 "product_name": "Malloc disk", 00:20:57.904 "block_size": 512, 00:20:57.904 "num_blocks": 65536, 00:20:57.904 "uuid": "14406cb3-a875-4a8b-9670-33c602674106", 00:20:57.904 "assigned_rate_limits": { 00:20:57.904 "rw_ios_per_sec": 0, 00:20:57.904 "rw_mbytes_per_sec": 0, 00:20:57.904 "r_mbytes_per_sec": 0, 00:20:57.904 "w_mbytes_per_sec": 0 00:20:57.904 }, 00:20:57.904 "claimed": true, 00:20:57.904 "claim_type": "exclusive_write", 00:20:57.904 "zoned": false, 00:20:57.904 "supported_io_types": { 00:20:57.904 "read": true, 00:20:57.904 "write": true, 00:20:57.904 "unmap": true, 00:20:57.904 "write_zeroes": true, 00:20:57.904 "flush": true, 00:20:57.904 "reset": 
true, 00:20:57.904 "compare": false, 00:20:57.904 "compare_and_write": false, 00:20:57.904 "abort": true, 00:20:57.904 "nvme_admin": false, 00:20:57.904 "nvme_io": false 00:20:57.904 }, 00:20:57.904 "memory_domains": [ 00:20:57.904 { 00:20:57.904 "dma_device_id": "system", 00:20:57.904 "dma_device_type": 1 00:20:57.904 }, 00:20:57.904 { 00:20:57.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.905 "dma_device_type": 2 00:20:57.905 } 00:20:57.905 ], 00:20:57.905 "driver_specific": {} 00:20:57.905 }' 00:20:57.905 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.164 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.423 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.424 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.424 19:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:58.684 [2024-06-10 19:05:13.181122] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:58.684 [2024-06-10 19:05:13.181145] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.684 [2024-06-10 19:05:13.181189] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.684 [2024-06-10 19:05:13.181240] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.684 [2024-06-10 19:05:13.181251] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d76500 name Existed_Raid, state offline 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1708442 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1708442 ']' 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1708442 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1708442 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1708442' 00:20:58.684 killing process with pid 1708442 00:20:58.684 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1708442 00:20:58.684 [2024-06-10 19:05:13.261003] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:58.684 19:05:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1708442 00:20:58.684 [2024-06-10 19:05:13.293052] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:58.943 00:20:58.943 real 0m30.393s 00:20:58.943 user 0m55.738s 00:20:58.943 sys 0m5.485s 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.943 ************************************ 00:20:58.943 END TEST raid_state_function_test 00:20:58.943 ************************************ 00:20:58.943 19:05:13 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:20:58.943 19:05:13 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:20:58.943 19:05:13 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:58.943 19:05:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.943 ************************************ 00:20:58.943 START TEST raid_state_function_test_sb 00:20:58.943 ************************************ 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 4 true 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:58.943 19:05:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.943 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:58.944 
19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1714291 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1714291' 00:20:58.944 Process raid pid: 1714291 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1714291 /var/tmp/spdk-raid.sock 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1714291 ']' 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:58.944 19:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.944 [2024-06-10 19:05:13.635911] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:20:58.944 [2024-06-10 19:05:13.635967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.0 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.1 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.2 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.3 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.4 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.5 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.6 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:01.7 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.0 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.1 cannot be 
used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.2 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.3 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.4 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.5 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.6 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b6:02.7 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.0 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.1 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.2 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.3 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.4 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.5 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.6 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:01.7 cannot be used 00:20:59.204 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.0 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.1 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.2 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.3 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.4 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.5 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.6 cannot be used 00:20:59.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.204 EAL: Requested device 0000:b8:02.7 cannot be used 00:20:59.204 [2024-06-10 19:05:13.771566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.204 [2024-06-10 19:05:13.858174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.204 [2024-06-10 19:05:13.927311] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.204 [2024-06-10 19:05:13.927340] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:00.196 [2024-06-10 19:05:14.745280] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:00.196 [2024-06-10 19:05:14.745320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:00.196 [2024-06-10 19:05:14.745330] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:00.196 [2024-06-10 19:05:14.745342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:00.196 [2024-06-10 19:05:14.745350] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:00.196 [2024-06-10 19:05:14.745360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:00.196 [2024-06-10 19:05:14.745368] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:00.196 [2024-06-10 19:05:14.745383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.196 19:05:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.196 19:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.495 19:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.495 "name": "Existed_Raid", 00:21:00.495 "uuid": "63f06ba7-3abe-45db-a80e-0f0c54777211", 00:21:00.495 "strip_size_kb": 64, 00:21:00.495 "state": "configuring", 00:21:00.495 "raid_level": "concat", 00:21:00.495 "superblock": true, 00:21:00.495 "num_base_bdevs": 4, 00:21:00.495 "num_base_bdevs_discovered": 0, 00:21:00.495 "num_base_bdevs_operational": 4, 00:21:00.495 "base_bdevs_list": [ 00:21:00.495 { 00:21:00.495 "name": "BaseBdev1", 00:21:00.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.495 "is_configured": false, 00:21:00.495 "data_offset": 0, 00:21:00.495 "data_size": 0 00:21:00.495 }, 00:21:00.495 { 00:21:00.495 "name": "BaseBdev2", 00:21:00.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.495 "is_configured": false, 00:21:00.495 "data_offset": 0, 00:21:00.495 "data_size": 0 00:21:00.495 }, 00:21:00.495 { 00:21:00.495 "name": "BaseBdev3", 00:21:00.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.495 "is_configured": false, 00:21:00.495 "data_offset": 0, 00:21:00.495 "data_size": 0 00:21:00.495 }, 00:21:00.495 { 00:21:00.495 "name": "BaseBdev4", 00:21:00.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.495 "is_configured": false, 00:21:00.495 "data_offset": 
0, 00:21:00.496 "data_size": 0 00:21:00.496 } 00:21:00.496 ] 00:21:00.496 }' 00:21:00.496 19:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.496 19:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.063 19:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:01.063 [2024-06-10 19:05:15.759795] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:01.063 [2024-06-10 19:05:15.759826] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x241bf50 name Existed_Raid, state configuring 00:21:01.063 19:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:01.322 [2024-06-10 19:05:15.988437] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.322 [2024-06-10 19:05:15.988467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.322 [2024-06-10 19:05:15.988476] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:01.322 [2024-06-10 19:05:15.988486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:01.322 [2024-06-10 19:05:15.988494] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:01.322 [2024-06-10 19:05:15.988514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:01.322 [2024-06-10 19:05:15.988522] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:01.322 [2024-06-10 
19:05:15.988532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:01.322 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:01.581 [2024-06-10 19:05:16.230452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.581 BaseBdev1 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:01.581 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:01.840 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:02.099 [ 00:21:02.099 { 00:21:02.099 "name": "BaseBdev1", 00:21:02.099 "aliases": [ 00:21:02.099 "edba5fc2-b253-4b93-b5bc-90b504388164" 00:21:02.099 ], 00:21:02.099 "product_name": "Malloc disk", 00:21:02.099 "block_size": 512, 00:21:02.099 "num_blocks": 65536, 00:21:02.099 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:02.099 "assigned_rate_limits": { 00:21:02.099 "rw_ios_per_sec": 0, 
00:21:02.099 "rw_mbytes_per_sec": 0, 00:21:02.099 "r_mbytes_per_sec": 0, 00:21:02.099 "w_mbytes_per_sec": 0 00:21:02.099 }, 00:21:02.099 "claimed": true, 00:21:02.099 "claim_type": "exclusive_write", 00:21:02.099 "zoned": false, 00:21:02.099 "supported_io_types": { 00:21:02.099 "read": true, 00:21:02.099 "write": true, 00:21:02.099 "unmap": true, 00:21:02.099 "write_zeroes": true, 00:21:02.099 "flush": true, 00:21:02.099 "reset": true, 00:21:02.099 "compare": false, 00:21:02.099 "compare_and_write": false, 00:21:02.099 "abort": true, 00:21:02.099 "nvme_admin": false, 00:21:02.099 "nvme_io": false 00:21:02.099 }, 00:21:02.099 "memory_domains": [ 00:21:02.099 { 00:21:02.099 "dma_device_id": "system", 00:21:02.099 "dma_device_type": 1 00:21:02.099 }, 00:21:02.099 { 00:21:02.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.099 "dma_device_type": 2 00:21:02.099 } 00:21:02.099 ], 00:21:02.099 "driver_specific": {} 00:21:02.099 } 00:21:02.099 ] 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.099 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.358 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.359 "name": "Existed_Raid", 00:21:02.359 "uuid": "405f5fb9-d199-4a90-a4ce-3b9f2e90f075", 00:21:02.359 "strip_size_kb": 64, 00:21:02.359 "state": "configuring", 00:21:02.359 "raid_level": "concat", 00:21:02.359 "superblock": true, 00:21:02.359 "num_base_bdevs": 4, 00:21:02.359 "num_base_bdevs_discovered": 1, 00:21:02.359 "num_base_bdevs_operational": 4, 00:21:02.359 "base_bdevs_list": [ 00:21:02.359 { 00:21:02.359 "name": "BaseBdev1", 00:21:02.359 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:02.359 "is_configured": true, 00:21:02.359 "data_offset": 2048, 00:21:02.359 "data_size": 63488 00:21:02.359 }, 00:21:02.359 { 00:21:02.359 "name": "BaseBdev2", 00:21:02.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.359 "is_configured": false, 00:21:02.359 "data_offset": 0, 00:21:02.359 "data_size": 0 00:21:02.359 }, 00:21:02.359 { 00:21:02.359 "name": "BaseBdev3", 00:21:02.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.359 "is_configured": false, 00:21:02.359 "data_offset": 0, 00:21:02.359 "data_size": 0 00:21:02.359 }, 00:21:02.359 { 00:21:02.359 "name": "BaseBdev4", 00:21:02.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.359 "is_configured": false, 00:21:02.359 "data_offset": 0, 00:21:02.359 "data_size": 0 
00:21:02.359 } 00:21:02.359 ] 00:21:02.359 }' 00:21:02.359 19:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.359 19:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.926 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:03.186 [2024-06-10 19:05:17.686273] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:03.186 [2024-06-10 19:05:17.686313] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x241b7c0 name Existed_Raid, state configuring 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:03.186 [2024-06-10 19:05:17.906903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.186 [2024-06-10 19:05:17.908327] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:03.186 [2024-06-10 19:05:17.908361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:03.186 [2024-06-10 19:05:17.908371] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:03.186 [2024-06-10 19:05:17.908381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:03.186 [2024-06-10 19:05:17.908389] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:03.186 [2024-06-10 19:05:17.908399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.186 19:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.445 19:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.445 "name": "Existed_Raid", 00:21:03.445 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:03.445 "strip_size_kb": 64, 00:21:03.445 "state": "configuring", 00:21:03.445 "raid_level": "concat", 
00:21:03.445 "superblock": true, 00:21:03.445 "num_base_bdevs": 4, 00:21:03.445 "num_base_bdevs_discovered": 1, 00:21:03.445 "num_base_bdevs_operational": 4, 00:21:03.445 "base_bdevs_list": [ 00:21:03.445 { 00:21:03.445 "name": "BaseBdev1", 00:21:03.445 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:03.445 "is_configured": true, 00:21:03.445 "data_offset": 2048, 00:21:03.445 "data_size": 63488 00:21:03.445 }, 00:21:03.445 { 00:21:03.445 "name": "BaseBdev2", 00:21:03.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.445 "is_configured": false, 00:21:03.445 "data_offset": 0, 00:21:03.445 "data_size": 0 00:21:03.445 }, 00:21:03.445 { 00:21:03.445 "name": "BaseBdev3", 00:21:03.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.445 "is_configured": false, 00:21:03.445 "data_offset": 0, 00:21:03.445 "data_size": 0 00:21:03.445 }, 00:21:03.445 { 00:21:03.445 "name": "BaseBdev4", 00:21:03.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.445 "is_configured": false, 00:21:03.445 "data_offset": 0, 00:21:03.445 "data_size": 0 00:21:03.445 } 00:21:03.445 ] 00:21:03.445 }' 00:21:03.445 19:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.445 19:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.013 19:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:04.273 [2024-06-10 19:05:18.940932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.273 BaseBdev2 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:04.273 19:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.532 19:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:04.792 [ 00:21:04.792 { 00:21:04.792 "name": "BaseBdev2", 00:21:04.792 "aliases": [ 00:21:04.792 "02363a9b-00ff-4339-9a01-ba40ec3b4c89" 00:21:04.792 ], 00:21:04.792 "product_name": "Malloc disk", 00:21:04.792 "block_size": 512, 00:21:04.792 "num_blocks": 65536, 00:21:04.792 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:04.792 "assigned_rate_limits": { 00:21:04.792 "rw_ios_per_sec": 0, 00:21:04.792 "rw_mbytes_per_sec": 0, 00:21:04.792 "r_mbytes_per_sec": 0, 00:21:04.792 "w_mbytes_per_sec": 0 00:21:04.792 }, 00:21:04.792 "claimed": true, 00:21:04.792 "claim_type": "exclusive_write", 00:21:04.792 "zoned": false, 00:21:04.792 "supported_io_types": { 00:21:04.792 "read": true, 00:21:04.792 "write": true, 00:21:04.792 "unmap": true, 00:21:04.792 "write_zeroes": true, 00:21:04.792 "flush": true, 00:21:04.792 "reset": true, 00:21:04.792 "compare": false, 00:21:04.792 "compare_and_write": false, 00:21:04.792 "abort": true, 00:21:04.792 "nvme_admin": false, 00:21:04.792 "nvme_io": false 00:21:04.792 }, 00:21:04.792 "memory_domains": [ 00:21:04.792 { 00:21:04.792 "dma_device_id": "system", 00:21:04.792 "dma_device_type": 1 00:21:04.792 }, 00:21:04.792 { 
00:21:04.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.792 "dma_device_type": 2 00:21:04.792 } 00:21:04.792 ], 00:21:04.792 "driver_specific": {} 00:21:04.792 } 00:21:04.792 ] 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.792 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:21:05.051 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.051 "name": "Existed_Raid", 00:21:05.051 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:05.051 "strip_size_kb": 64, 00:21:05.051 "state": "configuring", 00:21:05.051 "raid_level": "concat", 00:21:05.051 "superblock": true, 00:21:05.051 "num_base_bdevs": 4, 00:21:05.051 "num_base_bdevs_discovered": 2, 00:21:05.051 "num_base_bdevs_operational": 4, 00:21:05.051 "base_bdevs_list": [ 00:21:05.051 { 00:21:05.051 "name": "BaseBdev1", 00:21:05.051 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:05.051 "is_configured": true, 00:21:05.051 "data_offset": 2048, 00:21:05.051 "data_size": 63488 00:21:05.051 }, 00:21:05.052 { 00:21:05.052 "name": "BaseBdev2", 00:21:05.052 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:05.052 "is_configured": true, 00:21:05.052 "data_offset": 2048, 00:21:05.052 "data_size": 63488 00:21:05.052 }, 00:21:05.052 { 00:21:05.052 "name": "BaseBdev3", 00:21:05.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.052 "is_configured": false, 00:21:05.052 "data_offset": 0, 00:21:05.052 "data_size": 0 00:21:05.052 }, 00:21:05.052 { 00:21:05.052 "name": "BaseBdev4", 00:21:05.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.052 "is_configured": false, 00:21:05.052 "data_offset": 0, 00:21:05.052 "data_size": 0 00:21:05.052 } 00:21:05.052 ] 00:21:05.052 }' 00:21:05.052 19:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.052 19:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.620 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:05.880 [2024-06-10 19:05:20.436248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:21:05.880 BaseBdev3 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:05.880 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:06.139 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:06.139 [ 00:21:06.139 { 00:21:06.139 "name": "BaseBdev3", 00:21:06.139 "aliases": [ 00:21:06.139 "0db6f5b2-7b02-403a-bf5c-9395dadecbc1" 00:21:06.139 ], 00:21:06.139 "product_name": "Malloc disk", 00:21:06.139 "block_size": 512, 00:21:06.139 "num_blocks": 65536, 00:21:06.139 "uuid": "0db6f5b2-7b02-403a-bf5c-9395dadecbc1", 00:21:06.139 "assigned_rate_limits": { 00:21:06.139 "rw_ios_per_sec": 0, 00:21:06.139 "rw_mbytes_per_sec": 0, 00:21:06.139 "r_mbytes_per_sec": 0, 00:21:06.139 "w_mbytes_per_sec": 0 00:21:06.139 }, 00:21:06.139 "claimed": true, 00:21:06.139 "claim_type": "exclusive_write", 00:21:06.139 "zoned": false, 00:21:06.139 "supported_io_types": { 00:21:06.139 "read": true, 00:21:06.139 "write": true, 00:21:06.139 "unmap": true, 00:21:06.139 "write_zeroes": true, 00:21:06.139 "flush": true, 
00:21:06.139 "reset": true, 00:21:06.139 "compare": false, 00:21:06.139 "compare_and_write": false, 00:21:06.139 "abort": true, 00:21:06.139 "nvme_admin": false, 00:21:06.139 "nvme_io": false 00:21:06.139 }, 00:21:06.139 "memory_domains": [ 00:21:06.139 { 00:21:06.139 "dma_device_id": "system", 00:21:06.139 "dma_device_type": 1 00:21:06.139 }, 00:21:06.139 { 00:21:06.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.139 "dma_device_type": 2 00:21:06.139 } 00:21:06.139 ], 00:21:06.139 "driver_specific": {} 00:21:06.139 } 00:21:06.139 ] 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.399 19:05:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.399 19:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.399 19:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.399 "name": "Existed_Raid", 00:21:06.399 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:06.399 "strip_size_kb": 64, 00:21:06.399 "state": "configuring", 00:21:06.399 "raid_level": "concat", 00:21:06.399 "superblock": true, 00:21:06.399 "num_base_bdevs": 4, 00:21:06.399 "num_base_bdevs_discovered": 3, 00:21:06.399 "num_base_bdevs_operational": 4, 00:21:06.399 "base_bdevs_list": [ 00:21:06.399 { 00:21:06.399 "name": "BaseBdev1", 00:21:06.399 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:06.399 "is_configured": true, 00:21:06.399 "data_offset": 2048, 00:21:06.399 "data_size": 63488 00:21:06.399 }, 00:21:06.399 { 00:21:06.399 "name": "BaseBdev2", 00:21:06.399 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:06.399 "is_configured": true, 00:21:06.399 "data_offset": 2048, 00:21:06.399 "data_size": 63488 00:21:06.399 }, 00:21:06.399 { 00:21:06.399 "name": "BaseBdev3", 00:21:06.399 "uuid": "0db6f5b2-7b02-403a-bf5c-9395dadecbc1", 00:21:06.399 "is_configured": true, 00:21:06.399 "data_offset": 2048, 00:21:06.399 "data_size": 63488 00:21:06.399 }, 00:21:06.399 { 00:21:06.399 "name": "BaseBdev4", 00:21:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.399 "is_configured": false, 00:21:06.399 "data_offset": 0, 00:21:06.399 "data_size": 0 00:21:06.399 } 00:21:06.399 ] 00:21:06.399 }' 00:21:06.399 19:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.399 19:05:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.967 19:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:07.227 [2024-06-10 19:05:21.807222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:07.227 [2024-06-10 19:05:21.807384] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x241c820 00:21:07.227 [2024-06-10 19:05:21.807397] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:07.227 [2024-06-10 19:05:21.807556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x241d470 00:21:07.227 [2024-06-10 19:05:21.807707] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x241c820 00:21:07.227 [2024-06-10 19:05:21.807722] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x241c820 00:21:07.227 [2024-06-10 19:05:21.807820] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.227 BaseBdev4 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:07.227 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.486 19:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:07.486 [ 00:21:07.486 { 00:21:07.486 "name": "BaseBdev4", 00:21:07.486 "aliases": [ 00:21:07.486 "36aed1db-1155-42b7-ad2f-1084e39394e2" 00:21:07.486 ], 00:21:07.486 "product_name": "Malloc disk", 00:21:07.486 "block_size": 512, 00:21:07.486 "num_blocks": 65536, 00:21:07.486 "uuid": "36aed1db-1155-42b7-ad2f-1084e39394e2", 00:21:07.486 "assigned_rate_limits": { 00:21:07.486 "rw_ios_per_sec": 0, 00:21:07.486 "rw_mbytes_per_sec": 0, 00:21:07.486 "r_mbytes_per_sec": 0, 00:21:07.486 "w_mbytes_per_sec": 0 00:21:07.486 }, 00:21:07.486 "claimed": true, 00:21:07.486 "claim_type": "exclusive_write", 00:21:07.486 "zoned": false, 00:21:07.486 "supported_io_types": { 00:21:07.486 "read": true, 00:21:07.486 "write": true, 00:21:07.486 "unmap": true, 00:21:07.486 "write_zeroes": true, 00:21:07.486 "flush": true, 00:21:07.486 "reset": true, 00:21:07.486 "compare": false, 00:21:07.486 "compare_and_write": false, 00:21:07.486 "abort": true, 00:21:07.486 "nvme_admin": false, 00:21:07.486 "nvme_io": false 00:21:07.486 }, 00:21:07.486 "memory_domains": [ 00:21:07.486 { 00:21:07.486 "dma_device_id": "system", 00:21:07.486 "dma_device_type": 1 00:21:07.486 }, 00:21:07.486 { 00:21:07.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.486 "dma_device_type": 2 00:21:07.486 } 00:21:07.486 ], 00:21:07.486 "driver_specific": {} 00:21:07.486 } 00:21:07.486 ] 00:21:07.486 19:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:07.486 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:07.486 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:21:07.486 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.487 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.746 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.746 "name": "Existed_Raid", 00:21:07.746 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:07.746 "strip_size_kb": 64, 00:21:07.746 "state": "online", 00:21:07.746 "raid_level": "concat", 00:21:07.746 "superblock": true, 00:21:07.746 "num_base_bdevs": 4, 00:21:07.746 "num_base_bdevs_discovered": 4, 00:21:07.746 
"num_base_bdevs_operational": 4, 00:21:07.746 "base_bdevs_list": [ 00:21:07.746 { 00:21:07.746 "name": "BaseBdev1", 00:21:07.746 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:07.746 "is_configured": true, 00:21:07.746 "data_offset": 2048, 00:21:07.746 "data_size": 63488 00:21:07.746 }, 00:21:07.746 { 00:21:07.746 "name": "BaseBdev2", 00:21:07.746 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:07.746 "is_configured": true, 00:21:07.746 "data_offset": 2048, 00:21:07.746 "data_size": 63488 00:21:07.746 }, 00:21:07.746 { 00:21:07.746 "name": "BaseBdev3", 00:21:07.746 "uuid": "0db6f5b2-7b02-403a-bf5c-9395dadecbc1", 00:21:07.746 "is_configured": true, 00:21:07.746 "data_offset": 2048, 00:21:07.746 "data_size": 63488 00:21:07.746 }, 00:21:07.746 { 00:21:07.746 "name": "BaseBdev4", 00:21:07.746 "uuid": "36aed1db-1155-42b7-ad2f-1084e39394e2", 00:21:07.746 "is_configured": true, 00:21:07.746 "data_offset": 2048, 00:21:07.746 "data_size": 63488 00:21:07.746 } 00:21:07.746 ] 00:21:07.746 }' 00:21:07.746 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.746 19:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:08.313 19:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:08.571 [2024-06-10 19:05:23.199183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.571 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:08.571 "name": "Existed_Raid", 00:21:08.571 "aliases": [ 00:21:08.571 "04f7b99e-4770-46aa-b7d5-564650189425" 00:21:08.571 ], 00:21:08.571 "product_name": "Raid Volume", 00:21:08.571 "block_size": 512, 00:21:08.571 "num_blocks": 253952, 00:21:08.571 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:08.571 "assigned_rate_limits": { 00:21:08.571 "rw_ios_per_sec": 0, 00:21:08.571 "rw_mbytes_per_sec": 0, 00:21:08.571 "r_mbytes_per_sec": 0, 00:21:08.571 "w_mbytes_per_sec": 0 00:21:08.571 }, 00:21:08.571 "claimed": false, 00:21:08.571 "zoned": false, 00:21:08.571 "supported_io_types": { 00:21:08.571 "read": true, 00:21:08.571 "write": true, 00:21:08.571 "unmap": true, 00:21:08.571 "write_zeroes": true, 00:21:08.571 "flush": true, 00:21:08.571 "reset": true, 00:21:08.571 "compare": false, 00:21:08.571 "compare_and_write": false, 00:21:08.571 "abort": false, 00:21:08.571 "nvme_admin": false, 00:21:08.571 "nvme_io": false 00:21:08.571 }, 00:21:08.571 "memory_domains": [ 00:21:08.571 { 00:21:08.571 "dma_device_id": "system", 00:21:08.571 "dma_device_type": 1 00:21:08.571 }, 00:21:08.571 { 00:21:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.571 "dma_device_type": 2 00:21:08.571 }, 00:21:08.571 { 00:21:08.571 "dma_device_id": "system", 00:21:08.571 "dma_device_type": 1 00:21:08.571 }, 00:21:08.571 { 00:21:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.571 "dma_device_type": 2 00:21:08.571 }, 00:21:08.571 { 00:21:08.571 "dma_device_id": "system", 00:21:08.571 "dma_device_type": 1 00:21:08.571 }, 
00:21:08.571 { 00:21:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.571 "dma_device_type": 2 00:21:08.571 }, 00:21:08.571 { 00:21:08.571 "dma_device_id": "system", 00:21:08.571 "dma_device_type": 1 00:21:08.571 }, 00:21:08.571 { 00:21:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.571 "dma_device_type": 2 00:21:08.571 } 00:21:08.571 ], 00:21:08.571 "driver_specific": { 00:21:08.572 "raid": { 00:21:08.572 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:08.572 "strip_size_kb": 64, 00:21:08.572 "state": "online", 00:21:08.572 "raid_level": "concat", 00:21:08.572 "superblock": true, 00:21:08.572 "num_base_bdevs": 4, 00:21:08.572 "num_base_bdevs_discovered": 4, 00:21:08.572 "num_base_bdevs_operational": 4, 00:21:08.572 "base_bdevs_list": [ 00:21:08.572 { 00:21:08.572 "name": "BaseBdev1", 00:21:08.572 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:08.572 "is_configured": true, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 }, 00:21:08.572 { 00:21:08.572 "name": "BaseBdev2", 00:21:08.572 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:08.572 "is_configured": true, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 }, 00:21:08.572 { 00:21:08.572 "name": "BaseBdev3", 00:21:08.572 "uuid": "0db6f5b2-7b02-403a-bf5c-9395dadecbc1", 00:21:08.572 "is_configured": true, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 }, 00:21:08.572 { 00:21:08.572 "name": "BaseBdev4", 00:21:08.572 "uuid": "36aed1db-1155-42b7-ad2f-1084e39394e2", 00:21:08.572 "is_configured": true, 00:21:08.572 "data_offset": 2048, 00:21:08.572 "data_size": 63488 00:21:08.572 } 00:21:08.572 ] 00:21:08.572 } 00:21:08.572 } 00:21:08.572 }' 00:21:08.572 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:08.572 19:05:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:08.572 BaseBdev2 00:21:08.572 BaseBdev3 00:21:08.572 BaseBdev4' 00:21:08.572 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:08.572 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:08.572 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.831 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.831 "name": "BaseBdev1", 00:21:08.831 "aliases": [ 00:21:08.831 "edba5fc2-b253-4b93-b5bc-90b504388164" 00:21:08.831 ], 00:21:08.831 "product_name": "Malloc disk", 00:21:08.831 "block_size": 512, 00:21:08.831 "num_blocks": 65536, 00:21:08.831 "uuid": "edba5fc2-b253-4b93-b5bc-90b504388164", 00:21:08.831 "assigned_rate_limits": { 00:21:08.831 "rw_ios_per_sec": 0, 00:21:08.831 "rw_mbytes_per_sec": 0, 00:21:08.831 "r_mbytes_per_sec": 0, 00:21:08.831 "w_mbytes_per_sec": 0 00:21:08.831 }, 00:21:08.831 "claimed": true, 00:21:08.831 "claim_type": "exclusive_write", 00:21:08.831 "zoned": false, 00:21:08.831 "supported_io_types": { 00:21:08.831 "read": true, 00:21:08.831 "write": true, 00:21:08.831 "unmap": true, 00:21:08.831 "write_zeroes": true, 00:21:08.831 "flush": true, 00:21:08.831 "reset": true, 00:21:08.831 "compare": false, 00:21:08.831 "compare_and_write": false, 00:21:08.831 "abort": true, 00:21:08.831 "nvme_admin": false, 00:21:08.831 "nvme_io": false 00:21:08.831 }, 00:21:08.831 "memory_domains": [ 00:21:08.831 { 00:21:08.831 "dma_device_id": "system", 00:21:08.831 "dma_device_type": 1 00:21:08.831 }, 00:21:08.831 { 00:21:08.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.831 "dma_device_type": 2 00:21:08.831 } 00:21:08.831 ], 00:21:08.831 "driver_specific": {} 00:21:08.831 }' 00:21:08.831 19:05:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.831 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.831 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:08.831 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:09.090 19:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:09.349 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:09.349 "name": "BaseBdev2", 00:21:09.349 "aliases": [ 00:21:09.349 "02363a9b-00ff-4339-9a01-ba40ec3b4c89" 00:21:09.349 ], 00:21:09.349 "product_name": "Malloc disk", 00:21:09.349 "block_size": 512, 00:21:09.349 
"num_blocks": 65536, 00:21:09.349 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:09.349 "assigned_rate_limits": { 00:21:09.349 "rw_ios_per_sec": 0, 00:21:09.349 "rw_mbytes_per_sec": 0, 00:21:09.349 "r_mbytes_per_sec": 0, 00:21:09.349 "w_mbytes_per_sec": 0 00:21:09.349 }, 00:21:09.349 "claimed": true, 00:21:09.349 "claim_type": "exclusive_write", 00:21:09.349 "zoned": false, 00:21:09.349 "supported_io_types": { 00:21:09.349 "read": true, 00:21:09.349 "write": true, 00:21:09.349 "unmap": true, 00:21:09.349 "write_zeroes": true, 00:21:09.349 "flush": true, 00:21:09.349 "reset": true, 00:21:09.349 "compare": false, 00:21:09.349 "compare_and_write": false, 00:21:09.349 "abort": true, 00:21:09.349 "nvme_admin": false, 00:21:09.349 "nvme_io": false 00:21:09.349 }, 00:21:09.349 "memory_domains": [ 00:21:09.349 { 00:21:09.349 "dma_device_id": "system", 00:21:09.349 "dma_device_type": 1 00:21:09.349 }, 00:21:09.349 { 00:21:09.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.349 "dma_device_type": 2 00:21:09.349 } 00:21:09.349 ], 00:21:09.349 "driver_specific": {} 00:21:09.349 }' 00:21:09.349 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.608 19:05:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.608 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.867 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.867 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:09.867 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:09.867 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:09.867 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:09.867 "name": "BaseBdev3", 00:21:09.867 "aliases": [ 00:21:09.867 "0db6f5b2-7b02-403a-bf5c-9395dadecbc1" 00:21:09.867 ], 00:21:09.867 "product_name": "Malloc disk", 00:21:09.867 "block_size": 512, 00:21:09.867 "num_blocks": 65536, 00:21:09.867 "uuid": "0db6f5b2-7b02-403a-bf5c-9395dadecbc1", 00:21:09.867 "assigned_rate_limits": { 00:21:09.867 "rw_ios_per_sec": 0, 00:21:09.867 "rw_mbytes_per_sec": 0, 00:21:09.867 "r_mbytes_per_sec": 0, 00:21:09.867 "w_mbytes_per_sec": 0 00:21:09.867 }, 00:21:09.867 "claimed": true, 00:21:09.867 "claim_type": "exclusive_write", 00:21:09.867 "zoned": false, 00:21:09.867 "supported_io_types": { 00:21:09.867 "read": true, 00:21:09.867 "write": true, 00:21:09.867 "unmap": true, 00:21:09.867 "write_zeroes": true, 00:21:09.867 "flush": true, 00:21:09.867 "reset": true, 00:21:09.867 "compare": false, 00:21:09.867 "compare_and_write": false, 00:21:09.867 "abort": true, 00:21:09.867 "nvme_admin": false, 00:21:09.867 "nvme_io": false 00:21:09.867 }, 00:21:09.867 "memory_domains": [ 00:21:09.867 { 00:21:09.867 
"dma_device_id": "system", 00:21:09.867 "dma_device_type": 1 00:21:09.867 }, 00:21:09.867 { 00:21:09.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.867 "dma_device_type": 2 00:21:09.867 } 00:21:09.867 ], 00:21:09.867 "driver_specific": {} 00:21:09.867 }' 00:21:09.867 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.126 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.385 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.385 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.385 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.385 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:10.385 19:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.644 19:05:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.644 "name": "BaseBdev4", 00:21:10.644 "aliases": [ 00:21:10.644 "36aed1db-1155-42b7-ad2f-1084e39394e2" 00:21:10.644 ], 00:21:10.644 "product_name": "Malloc disk", 00:21:10.644 "block_size": 512, 00:21:10.644 "num_blocks": 65536, 00:21:10.644 "uuid": "36aed1db-1155-42b7-ad2f-1084e39394e2", 00:21:10.644 "assigned_rate_limits": { 00:21:10.644 "rw_ios_per_sec": 0, 00:21:10.644 "rw_mbytes_per_sec": 0, 00:21:10.644 "r_mbytes_per_sec": 0, 00:21:10.644 "w_mbytes_per_sec": 0 00:21:10.644 }, 00:21:10.644 "claimed": true, 00:21:10.644 "claim_type": "exclusive_write", 00:21:10.644 "zoned": false, 00:21:10.644 "supported_io_types": { 00:21:10.644 "read": true, 00:21:10.644 "write": true, 00:21:10.644 "unmap": true, 00:21:10.644 "write_zeroes": true, 00:21:10.644 "flush": true, 00:21:10.644 "reset": true, 00:21:10.644 "compare": false, 00:21:10.644 "compare_and_write": false, 00:21:10.644 "abort": true, 00:21:10.644 "nvme_admin": false, 00:21:10.644 "nvme_io": false 00:21:10.644 }, 00:21:10.644 "memory_domains": [ 00:21:10.644 { 00:21:10.644 "dma_device_id": "system", 00:21:10.644 "dma_device_type": 1 00:21:10.644 }, 00:21:10.644 { 00:21:10.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.644 "dma_device_type": 2 00:21:10.644 } 00:21:10.644 ], 00:21:10.644 "driver_specific": {} 00:21:10.644 }' 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.644 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.903 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.903 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.903 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.903 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.903 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.903 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:11.163 [2024-06-10 19:05:25.729752] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.163 [2024-06-10 19:05:25.729777] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.163 [2024-06-10 19:05:25.729819] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.163 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.422 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.422 "name": "Existed_Raid", 00:21:11.422 "uuid": "04f7b99e-4770-46aa-b7d5-564650189425", 00:21:11.422 "strip_size_kb": 64, 00:21:11.422 "state": "offline", 00:21:11.422 "raid_level": "concat", 00:21:11.422 "superblock": true, 00:21:11.422 "num_base_bdevs": 4, 00:21:11.422 "num_base_bdevs_discovered": 3, 00:21:11.422 "num_base_bdevs_operational": 3, 00:21:11.422 "base_bdevs_list": [ 00:21:11.422 { 00:21:11.422 "name": null, 00:21:11.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.422 "is_configured": false, 00:21:11.422 "data_offset": 2048, 
00:21:11.422 "data_size": 63488 00:21:11.422 }, 00:21:11.422 { 00:21:11.422 "name": "BaseBdev2", 00:21:11.422 "uuid": "02363a9b-00ff-4339-9a01-ba40ec3b4c89", 00:21:11.422 "is_configured": true, 00:21:11.422 "data_offset": 2048, 00:21:11.422 "data_size": 63488 00:21:11.422 }, 00:21:11.422 { 00:21:11.422 "name": "BaseBdev3", 00:21:11.422 "uuid": "0db6f5b2-7b02-403a-bf5c-9395dadecbc1", 00:21:11.422 "is_configured": true, 00:21:11.422 "data_offset": 2048, 00:21:11.422 "data_size": 63488 00:21:11.422 }, 00:21:11.422 { 00:21:11.422 "name": "BaseBdev4", 00:21:11.422 "uuid": "36aed1db-1155-42b7-ad2f-1084e39394e2", 00:21:11.422 "is_configured": true, 00:21:11.422 "data_offset": 2048, 00:21:11.422 "data_size": 63488 00:21:11.422 } 00:21:11.422 ] 00:21:11.422 }' 00:21:11.422 19:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.422 19:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.990 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:11.990 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:11.990 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.990 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:12.249 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:12.249 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.249 19:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:12.249 [2024-06-10 19:05:26.998070] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.508 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:12.768 [2024-06-10 19:05:27.465333] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:12.768 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.768 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.768 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.768 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:13.027 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:13.027 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:13.027 19:05:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:13.286 [2024-06-10 19:05:27.920536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:13.286 [2024-06-10 19:05:27.920587] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x241c820 name Existed_Raid, state offline 00:21:13.286 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:13.286 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:13.286 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.286 19:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:13.546 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:13.546 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:13.546 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:13.546 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:13.546 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:13.546 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.805 BaseBdev2 00:21:13.805 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:13.805 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:21:13.805 19:05:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:13.805 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:13.805 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:13.805 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:13.805 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.065 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:14.324 [ 00:21:14.324 { 00:21:14.324 "name": "BaseBdev2", 00:21:14.324 "aliases": [ 00:21:14.324 "3272fe12-3289-4990-a5f4-adc54c66ca5c" 00:21:14.324 ], 00:21:14.324 "product_name": "Malloc disk", 00:21:14.324 "block_size": 512, 00:21:14.324 "num_blocks": 65536, 00:21:14.324 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:14.324 "assigned_rate_limits": { 00:21:14.324 "rw_ios_per_sec": 0, 00:21:14.324 "rw_mbytes_per_sec": 0, 00:21:14.324 "r_mbytes_per_sec": 0, 00:21:14.324 "w_mbytes_per_sec": 0 00:21:14.324 }, 00:21:14.324 "claimed": false, 00:21:14.324 "zoned": false, 00:21:14.324 "supported_io_types": { 00:21:14.324 "read": true, 00:21:14.324 "write": true, 00:21:14.324 "unmap": true, 00:21:14.324 "write_zeroes": true, 00:21:14.324 "flush": true, 00:21:14.324 "reset": true, 00:21:14.324 "compare": false, 00:21:14.324 "compare_and_write": false, 00:21:14.324 "abort": true, 00:21:14.324 "nvme_admin": false, 00:21:14.324 "nvme_io": false 00:21:14.324 }, 00:21:14.324 "memory_domains": [ 00:21:14.324 { 00:21:14.324 "dma_device_id": "system", 00:21:14.324 "dma_device_type": 1 00:21:14.324 }, 00:21:14.324 { 
00:21:14.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.324 "dma_device_type": 2 00:21:14.324 } 00:21:14.324 ], 00:21:14.324 "driver_specific": {} 00:21:14.324 } 00:21:14.324 ] 00:21:14.324 19:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:14.325 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:14.325 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:14.325 19:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.325 BaseBdev3 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.584 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:14.843 [ 00:21:14.843 { 00:21:14.843 "name": "BaseBdev3", 00:21:14.843 "aliases": [ 00:21:14.843 
"978e2b7b-f2a7-4316-b140-3b622a1bb8b3" 00:21:14.843 ], 00:21:14.843 "product_name": "Malloc disk", 00:21:14.843 "block_size": 512, 00:21:14.843 "num_blocks": 65536, 00:21:14.843 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:14.843 "assigned_rate_limits": { 00:21:14.843 "rw_ios_per_sec": 0, 00:21:14.843 "rw_mbytes_per_sec": 0, 00:21:14.843 "r_mbytes_per_sec": 0, 00:21:14.843 "w_mbytes_per_sec": 0 00:21:14.843 }, 00:21:14.843 "claimed": false, 00:21:14.843 "zoned": false, 00:21:14.843 "supported_io_types": { 00:21:14.843 "read": true, 00:21:14.843 "write": true, 00:21:14.843 "unmap": true, 00:21:14.843 "write_zeroes": true, 00:21:14.843 "flush": true, 00:21:14.843 "reset": true, 00:21:14.843 "compare": false, 00:21:14.843 "compare_and_write": false, 00:21:14.843 "abort": true, 00:21:14.843 "nvme_admin": false, 00:21:14.843 "nvme_io": false 00:21:14.843 }, 00:21:14.843 "memory_domains": [ 00:21:14.843 { 00:21:14.843 "dma_device_id": "system", 00:21:14.843 "dma_device_type": 1 00:21:14.843 }, 00:21:14.843 { 00:21:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.843 "dma_device_type": 2 00:21:14.843 } 00:21:14.843 ], 00:21:14.843 "driver_specific": {} 00:21:14.843 } 00:21:14.843 ] 00:21:14.843 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:14.843 19:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:14.843 19:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:14.843 19:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:15.102 BaseBdev4 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_name=BaseBdev4 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:15.102 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:15.362 19:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:15.621 [ 00:21:15.621 { 00:21:15.621 "name": "BaseBdev4", 00:21:15.621 "aliases": [ 00:21:15.621 "c3a4df03-7ee4-409d-8760-3c60359f9dce" 00:21:15.621 ], 00:21:15.621 "product_name": "Malloc disk", 00:21:15.621 "block_size": 512, 00:21:15.621 "num_blocks": 65536, 00:21:15.621 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:15.621 "assigned_rate_limits": { 00:21:15.621 "rw_ios_per_sec": 0, 00:21:15.621 "rw_mbytes_per_sec": 0, 00:21:15.621 "r_mbytes_per_sec": 0, 00:21:15.621 "w_mbytes_per_sec": 0 00:21:15.621 }, 00:21:15.621 "claimed": false, 00:21:15.621 "zoned": false, 00:21:15.621 "supported_io_types": { 00:21:15.621 "read": true, 00:21:15.621 "write": true, 00:21:15.621 "unmap": true, 00:21:15.621 "write_zeroes": true, 00:21:15.621 "flush": true, 00:21:15.621 "reset": true, 00:21:15.622 "compare": false, 00:21:15.622 "compare_and_write": false, 00:21:15.622 "abort": true, 00:21:15.622 "nvme_admin": false, 00:21:15.622 "nvme_io": false 00:21:15.622 }, 00:21:15.622 "memory_domains": [ 00:21:15.622 { 00:21:15.622 "dma_device_id": "system", 00:21:15.622 
"dma_device_type": 1 00:21:15.622 }, 00:21:15.622 { 00:21:15.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.622 "dma_device_type": 2 00:21:15.622 } 00:21:15.622 ], 00:21:15.622 "driver_specific": {} 00:21:15.622 } 00:21:15.622 ] 00:21:15.622 19:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:15.622 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:15.622 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:15.622 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:15.881 [2024-06-10 19:05:30.422397] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.881 [2024-06-10 19:05:30.422437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.881 [2024-06-10 19:05:30.422456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.881 [2024-06-10 19:05:30.423736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:15.881 [2024-06-10 19:05:30.423786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:15.881 
19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.881 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.141 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.141 "name": "Existed_Raid", 00:21:16.141 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:16.141 "strip_size_kb": 64, 00:21:16.141 "state": "configuring", 00:21:16.141 "raid_level": "concat", 00:21:16.141 "superblock": true, 00:21:16.141 "num_base_bdevs": 4, 00:21:16.141 "num_base_bdevs_discovered": 3, 00:21:16.141 "num_base_bdevs_operational": 4, 00:21:16.141 "base_bdevs_list": [ 00:21:16.141 { 00:21:16.141 "name": "BaseBdev1", 00:21:16.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.141 "is_configured": false, 00:21:16.141 "data_offset": 0, 00:21:16.141 "data_size": 0 00:21:16.141 }, 00:21:16.141 { 00:21:16.141 "name": "BaseBdev2", 00:21:16.141 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:16.141 "is_configured": true, 00:21:16.141 "data_offset": 2048, 00:21:16.141 "data_size": 63488 00:21:16.141 }, 00:21:16.141 { 00:21:16.141 "name": 
"BaseBdev3", 00:21:16.141 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:16.141 "is_configured": true, 00:21:16.141 "data_offset": 2048, 00:21:16.141 "data_size": 63488 00:21:16.141 }, 00:21:16.141 { 00:21:16.141 "name": "BaseBdev4", 00:21:16.141 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:16.141 "is_configured": true, 00:21:16.141 "data_offset": 2048, 00:21:16.141 "data_size": 63488 00:21:16.141 } 00:21:16.141 ] 00:21:16.141 }' 00:21:16.141 19:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.141 19:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:16.710 [2024-06-10 19:05:31.428989] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.710 19:05:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.710 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.969 19:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.969 "name": "Existed_Raid", 00:21:16.969 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:16.969 "strip_size_kb": 64, 00:21:16.969 "state": "configuring", 00:21:16.969 "raid_level": "concat", 00:21:16.969 "superblock": true, 00:21:16.969 "num_base_bdevs": 4, 00:21:16.969 "num_base_bdevs_discovered": 2, 00:21:16.969 "num_base_bdevs_operational": 4, 00:21:16.969 "base_bdevs_list": [ 00:21:16.969 { 00:21:16.969 "name": "BaseBdev1", 00:21:16.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.969 "is_configured": false, 00:21:16.969 "data_offset": 0, 00:21:16.969 "data_size": 0 00:21:16.969 }, 00:21:16.969 { 00:21:16.969 "name": null, 00:21:16.969 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:16.969 "is_configured": false, 00:21:16.969 "data_offset": 2048, 00:21:16.969 "data_size": 63488 00:21:16.969 }, 00:21:16.969 { 00:21:16.969 "name": "BaseBdev3", 00:21:16.969 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:16.969 "is_configured": true, 00:21:16.969 "data_offset": 2048, 00:21:16.969 "data_size": 63488 00:21:16.969 }, 00:21:16.969 { 00:21:16.969 "name": "BaseBdev4", 00:21:16.970 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:16.970 "is_configured": true, 00:21:16.970 "data_offset": 2048, 00:21:16.970 "data_size": 63488 00:21:16.970 } 00:21:16.970 ] 00:21:16.970 }' 00:21:16.970 19:05:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.970 19:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.537 19:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.537 19:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:17.796 19:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:17.796 19:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:18.097 [2024-06-10 19:05:32.711669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.097 BaseBdev1 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:18.097 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.382 19:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:18.642 [ 00:21:18.642 { 00:21:18.642 "name": "BaseBdev1", 00:21:18.642 "aliases": [ 00:21:18.642 "37a97d5d-0869-434c-8b7d-05067d35abbb" 00:21:18.642 ], 00:21:18.642 "product_name": "Malloc disk", 00:21:18.642 "block_size": 512, 00:21:18.642 "num_blocks": 65536, 00:21:18.642 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:18.642 "assigned_rate_limits": { 00:21:18.642 "rw_ios_per_sec": 0, 00:21:18.642 "rw_mbytes_per_sec": 0, 00:21:18.642 "r_mbytes_per_sec": 0, 00:21:18.642 "w_mbytes_per_sec": 0 00:21:18.642 }, 00:21:18.642 "claimed": true, 00:21:18.642 "claim_type": "exclusive_write", 00:21:18.642 "zoned": false, 00:21:18.642 "supported_io_types": { 00:21:18.642 "read": true, 00:21:18.642 "write": true, 00:21:18.642 "unmap": true, 00:21:18.642 "write_zeroes": true, 00:21:18.642 "flush": true, 00:21:18.642 "reset": true, 00:21:18.642 "compare": false, 00:21:18.642 "compare_and_write": false, 00:21:18.642 "abort": true, 00:21:18.642 "nvme_admin": false, 00:21:18.642 "nvme_io": false 00:21:18.642 }, 00:21:18.642 "memory_domains": [ 00:21:18.642 { 00:21:18.642 "dma_device_id": "system", 00:21:18.642 "dma_device_type": 1 00:21:18.642 }, 00:21:18.642 { 00:21:18.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.642 "dma_device_type": 2 00:21:18.642 } 00:21:18.642 ], 00:21:18.642 "driver_specific": {} 00:21:18.642 } 00:21:18.642 ] 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.642 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.904 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.904 "name": "Existed_Raid", 00:21:18.904 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:18.904 "strip_size_kb": 64, 00:21:18.904 "state": "configuring", 00:21:18.904 "raid_level": "concat", 00:21:18.904 "superblock": true, 00:21:18.904 "num_base_bdevs": 4, 00:21:18.904 "num_base_bdevs_discovered": 3, 00:21:18.904 "num_base_bdevs_operational": 4, 00:21:18.904 "base_bdevs_list": [ 00:21:18.904 { 00:21:18.904 "name": "BaseBdev1", 00:21:18.904 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:18.904 "is_configured": true, 00:21:18.904 "data_offset": 2048, 00:21:18.904 "data_size": 63488 00:21:18.904 }, 00:21:18.904 { 00:21:18.904 "name": null, 00:21:18.904 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:18.904 "is_configured": 
false, 00:21:18.904 "data_offset": 2048, 00:21:18.904 "data_size": 63488 00:21:18.904 }, 00:21:18.904 { 00:21:18.904 "name": "BaseBdev3", 00:21:18.904 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:18.904 "is_configured": true, 00:21:18.904 "data_offset": 2048, 00:21:18.904 "data_size": 63488 00:21:18.904 }, 00:21:18.904 { 00:21:18.904 "name": "BaseBdev4", 00:21:18.904 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:18.904 "is_configured": true, 00:21:18.904 "data_offset": 2048, 00:21:18.904 "data_size": 63488 00:21:18.904 } 00:21:18.904 ] 00:21:18.904 }' 00:21:18.904 19:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.904 19:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.473 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.473 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:19.732 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:19.732 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:19.732 [2024-06-10 19:05:34.476365] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.991 19:05:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.991 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.991 "name": "Existed_Raid", 00:21:19.991 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:19.991 "strip_size_kb": 64, 00:21:19.991 "state": "configuring", 00:21:19.991 "raid_level": "concat", 00:21:19.991 "superblock": true, 00:21:19.991 "num_base_bdevs": 4, 00:21:19.991 "num_base_bdevs_discovered": 2, 00:21:19.991 "num_base_bdevs_operational": 4, 00:21:19.991 "base_bdevs_list": [ 00:21:19.991 { 00:21:19.991 "name": "BaseBdev1", 00:21:19.991 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:19.991 "is_configured": true, 00:21:19.991 "data_offset": 2048, 00:21:19.991 "data_size": 63488 00:21:19.991 }, 00:21:19.991 { 00:21:19.991 "name": null, 00:21:19.991 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:19.991 "is_configured": false, 00:21:19.991 
"data_offset": 2048, 00:21:19.991 "data_size": 63488 00:21:19.991 }, 00:21:19.991 { 00:21:19.991 "name": null, 00:21:19.991 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:19.991 "is_configured": false, 00:21:19.991 "data_offset": 2048, 00:21:19.991 "data_size": 63488 00:21:19.991 }, 00:21:19.991 { 00:21:19.991 "name": "BaseBdev4", 00:21:19.991 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:19.991 "is_configured": true, 00:21:19.991 "data_offset": 2048, 00:21:19.992 "data_size": 63488 00:21:19.992 } 00:21:19.992 ] 00:21:19.992 }' 00:21:19.992 19:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.992 19:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.560 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.560 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:20.819 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:20.819 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:21.078 [2024-06-10 19:05:35.743718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:21.078 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:21.078 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:21.078 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.078 19:05:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.079 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.338 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.338 "name": "Existed_Raid", 00:21:21.338 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:21.338 "strip_size_kb": 64, 00:21:21.338 "state": "configuring", 00:21:21.338 "raid_level": "concat", 00:21:21.338 "superblock": true, 00:21:21.338 "num_base_bdevs": 4, 00:21:21.338 "num_base_bdevs_discovered": 3, 00:21:21.338 "num_base_bdevs_operational": 4, 00:21:21.338 "base_bdevs_list": [ 00:21:21.338 { 00:21:21.338 "name": "BaseBdev1", 00:21:21.338 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:21.338 "is_configured": true, 00:21:21.338 "data_offset": 2048, 00:21:21.338 "data_size": 63488 00:21:21.338 }, 00:21:21.338 { 00:21:21.338 "name": null, 00:21:21.338 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:21.338 "is_configured": false, 00:21:21.338 
"data_offset": 2048, 00:21:21.338 "data_size": 63488 00:21:21.338 }, 00:21:21.338 { 00:21:21.338 "name": "BaseBdev3", 00:21:21.338 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:21.338 "is_configured": true, 00:21:21.338 "data_offset": 2048, 00:21:21.338 "data_size": 63488 00:21:21.338 }, 00:21:21.338 { 00:21:21.338 "name": "BaseBdev4", 00:21:21.338 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:21.338 "is_configured": true, 00:21:21.338 "data_offset": 2048, 00:21:21.338 "data_size": 63488 00:21:21.338 } 00:21:21.338 ] 00:21:21.338 }' 00:21:21.338 19:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.338 19:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.906 19:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.906 19:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:22.188 19:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:22.188 19:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:22.447 [2024-06-10 19:05:36.999051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.447 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.707 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.707 "name": "Existed_Raid", 00:21:22.707 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:22.707 "strip_size_kb": 64, 00:21:22.707 "state": "configuring", 00:21:22.707 "raid_level": "concat", 00:21:22.707 "superblock": true, 00:21:22.707 "num_base_bdevs": 4, 00:21:22.707 "num_base_bdevs_discovered": 2, 00:21:22.707 "num_base_bdevs_operational": 4, 00:21:22.707 "base_bdevs_list": [ 00:21:22.707 { 00:21:22.707 "name": null, 00:21:22.707 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:22.707 "is_configured": false, 00:21:22.707 "data_offset": 2048, 00:21:22.707 "data_size": 63488 00:21:22.707 }, 00:21:22.707 { 00:21:22.707 "name": null, 00:21:22.707 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:22.707 "is_configured": false, 00:21:22.707 "data_offset": 2048, 00:21:22.707 "data_size": 
63488 00:21:22.707 }, 00:21:22.707 { 00:21:22.707 "name": "BaseBdev3", 00:21:22.707 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:22.707 "is_configured": true, 00:21:22.707 "data_offset": 2048, 00:21:22.707 "data_size": 63488 00:21:22.707 }, 00:21:22.707 { 00:21:22.707 "name": "BaseBdev4", 00:21:22.707 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:22.707 "is_configured": true, 00:21:22.707 "data_offset": 2048, 00:21:22.707 "data_size": 63488 00:21:22.707 } 00:21:22.707 ] 00:21:22.707 }' 00:21:22.707 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.707 19:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.275 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.275 19:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:23.534 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:23.534 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:23.534 [2024-06-10 19:05:38.290959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.793 "name": "Existed_Raid", 00:21:23.793 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:23.793 "strip_size_kb": 64, 00:21:23.793 "state": "configuring", 00:21:23.793 "raid_level": "concat", 00:21:23.793 "superblock": true, 00:21:23.793 "num_base_bdevs": 4, 00:21:23.793 "num_base_bdevs_discovered": 3, 00:21:23.793 "num_base_bdevs_operational": 4, 00:21:23.793 "base_bdevs_list": [ 00:21:23.793 { 00:21:23.793 "name": null, 00:21:23.793 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:23.793 "is_configured": false, 00:21:23.793 "data_offset": 2048, 00:21:23.793 "data_size": 63488 00:21:23.793 }, 00:21:23.793 { 00:21:23.793 "name": "BaseBdev2", 00:21:23.793 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:23.793 "is_configured": true, 00:21:23.793 "data_offset": 2048, 00:21:23.793 
"data_size": 63488 00:21:23.793 }, 00:21:23.793 { 00:21:23.793 "name": "BaseBdev3", 00:21:23.793 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:23.793 "is_configured": true, 00:21:23.793 "data_offset": 2048, 00:21:23.793 "data_size": 63488 00:21:23.793 }, 00:21:23.793 { 00:21:23.793 "name": "BaseBdev4", 00:21:23.793 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:23.793 "is_configured": true, 00:21:23.793 "data_offset": 2048, 00:21:23.793 "data_size": 63488 00:21:23.793 } 00:21:23.793 ] 00:21:23.793 }' 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.793 19:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.360 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.360 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:24.618 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:24.618 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.618 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:24.878 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 37a97d5d-0869-434c-8b7d-05067d35abbb 00:21:25.137 [2024-06-10 19:05:39.783397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:25.137 [2024-06-10 19:05:39.783558] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x241b000 00:21:25.137 [2024-06-10 19:05:39.783571] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:25.137 [2024-06-10 19:05:39.783753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2330f80 00:21:25.137 [2024-06-10 19:05:39.783879] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x241b000 00:21:25.137 [2024-06-10 19:05:39.783888] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x241b000 00:21:25.137 [2024-06-10 19:05:39.783984] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.137 NewBaseBdev 00:21:25.137 19:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:25.137 19:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:21:25.138 19:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:21:25.138 19:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:21:25.138 19:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:21:25.138 19:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:21:25.138 19:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.396 19:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:25.655 [ 00:21:25.655 { 00:21:25.655 "name": "NewBaseBdev", 00:21:25.655 "aliases": [ 00:21:25.655 "37a97d5d-0869-434c-8b7d-05067d35abbb" 00:21:25.655 ], 00:21:25.655 
"product_name": "Malloc disk", 00:21:25.655 "block_size": 512, 00:21:25.655 "num_blocks": 65536, 00:21:25.655 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:25.655 "assigned_rate_limits": { 00:21:25.655 "rw_ios_per_sec": 0, 00:21:25.655 "rw_mbytes_per_sec": 0, 00:21:25.655 "r_mbytes_per_sec": 0, 00:21:25.655 "w_mbytes_per_sec": 0 00:21:25.655 }, 00:21:25.655 "claimed": true, 00:21:25.655 "claim_type": "exclusive_write", 00:21:25.655 "zoned": false, 00:21:25.655 "supported_io_types": { 00:21:25.655 "read": true, 00:21:25.655 "write": true, 00:21:25.655 "unmap": true, 00:21:25.655 "write_zeroes": true, 00:21:25.655 "flush": true, 00:21:25.655 "reset": true, 00:21:25.655 "compare": false, 00:21:25.655 "compare_and_write": false, 00:21:25.655 "abort": true, 00:21:25.655 "nvme_admin": false, 00:21:25.655 "nvme_io": false 00:21:25.655 }, 00:21:25.655 "memory_domains": [ 00:21:25.655 { 00:21:25.655 "dma_device_id": "system", 00:21:25.655 "dma_device_type": 1 00:21:25.655 }, 00:21:25.655 { 00:21:25.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.655 "dma_device_type": 2 00:21:25.655 } 00:21:25.655 ], 00:21:25.655 "driver_specific": {} 00:21:25.655 } 00:21:25.655 ] 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=4 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.655 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.914 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.914 "name": "Existed_Raid", 00:21:25.914 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:25.914 "strip_size_kb": 64, 00:21:25.914 "state": "online", 00:21:25.914 "raid_level": "concat", 00:21:25.914 "superblock": true, 00:21:25.914 "num_base_bdevs": 4, 00:21:25.914 "num_base_bdevs_discovered": 4, 00:21:25.914 "num_base_bdevs_operational": 4, 00:21:25.914 "base_bdevs_list": [ 00:21:25.914 { 00:21:25.914 "name": "NewBaseBdev", 00:21:25.914 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:25.914 "is_configured": true, 00:21:25.914 "data_offset": 2048, 00:21:25.914 "data_size": 63488 00:21:25.914 }, 00:21:25.914 { 00:21:25.914 "name": "BaseBdev2", 00:21:25.914 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:25.914 "is_configured": true, 00:21:25.914 "data_offset": 2048, 00:21:25.914 "data_size": 63488 00:21:25.914 }, 00:21:25.914 { 00:21:25.914 "name": "BaseBdev3", 00:21:25.914 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:25.914 "is_configured": true, 00:21:25.914 "data_offset": 2048, 00:21:25.914 "data_size": 63488 
00:21:25.914 }, 00:21:25.914 { 00:21:25.914 "name": "BaseBdev4", 00:21:25.914 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:25.914 "is_configured": true, 00:21:25.914 "data_offset": 2048, 00:21:25.914 "data_size": 63488 00:21:25.914 } 00:21:25.914 ] 00:21:25.914 }' 00:21:25.914 19:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.914 19:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:26.481 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:26.740 [2024-06-10 19:05:41.275569] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.740 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:26.740 "name": "Existed_Raid", 00:21:26.740 "aliases": [ 00:21:26.740 "da222cff-05df-471a-9351-94e9954dc59f" 00:21:26.740 ], 00:21:26.740 "product_name": "Raid Volume", 00:21:26.740 "block_size": 512, 00:21:26.740 "num_blocks": 253952, 00:21:26.740 "uuid": 
"da222cff-05df-471a-9351-94e9954dc59f", 00:21:26.740 "assigned_rate_limits": { 00:21:26.740 "rw_ios_per_sec": 0, 00:21:26.740 "rw_mbytes_per_sec": 0, 00:21:26.740 "r_mbytes_per_sec": 0, 00:21:26.740 "w_mbytes_per_sec": 0 00:21:26.740 }, 00:21:26.740 "claimed": false, 00:21:26.740 "zoned": false, 00:21:26.740 "supported_io_types": { 00:21:26.740 "read": true, 00:21:26.740 "write": true, 00:21:26.740 "unmap": true, 00:21:26.740 "write_zeroes": true, 00:21:26.740 "flush": true, 00:21:26.740 "reset": true, 00:21:26.740 "compare": false, 00:21:26.740 "compare_and_write": false, 00:21:26.740 "abort": false, 00:21:26.740 "nvme_admin": false, 00:21:26.740 "nvme_io": false 00:21:26.740 }, 00:21:26.740 "memory_domains": [ 00:21:26.740 { 00:21:26.740 "dma_device_id": "system", 00:21:26.740 "dma_device_type": 1 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.740 "dma_device_type": 2 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "system", 00:21:26.740 "dma_device_type": 1 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.740 "dma_device_type": 2 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "system", 00:21:26.740 "dma_device_type": 1 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.740 "dma_device_type": 2 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "system", 00:21:26.740 "dma_device_type": 1 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.740 "dma_device_type": 2 00:21:26.740 } 00:21:26.740 ], 00:21:26.740 "driver_specific": { 00:21:26.740 "raid": { 00:21:26.740 "uuid": "da222cff-05df-471a-9351-94e9954dc59f", 00:21:26.740 "strip_size_kb": 64, 00:21:26.740 "state": "online", 00:21:26.740 "raid_level": "concat", 00:21:26.740 "superblock": true, 00:21:26.740 "num_base_bdevs": 4, 00:21:26.740 "num_base_bdevs_discovered": 4, 00:21:26.740 
"num_base_bdevs_operational": 4, 00:21:26.740 "base_bdevs_list": [ 00:21:26.740 { 00:21:26.740 "name": "NewBaseBdev", 00:21:26.740 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:26.740 "is_configured": true, 00:21:26.740 "data_offset": 2048, 00:21:26.740 "data_size": 63488 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "name": "BaseBdev2", 00:21:26.740 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:26.740 "is_configured": true, 00:21:26.740 "data_offset": 2048, 00:21:26.740 "data_size": 63488 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "name": "BaseBdev3", 00:21:26.740 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:26.740 "is_configured": true, 00:21:26.740 "data_offset": 2048, 00:21:26.740 "data_size": 63488 00:21:26.740 }, 00:21:26.740 { 00:21:26.740 "name": "BaseBdev4", 00:21:26.740 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:26.740 "is_configured": true, 00:21:26.740 "data_offset": 2048, 00:21:26.740 "data_size": 63488 00:21:26.740 } 00:21:26.740 ] 00:21:26.740 } 00:21:26.740 } 00:21:26.740 }' 00:21:26.740 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:26.740 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:26.740 BaseBdev2 00:21:26.740 BaseBdev3 00:21:26.740 BaseBdev4' 00:21:26.740 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.740 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:26.740 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.999 "name": "NewBaseBdev", 00:21:26.999 
"aliases": [ 00:21:26.999 "37a97d5d-0869-434c-8b7d-05067d35abbb" 00:21:26.999 ], 00:21:26.999 "product_name": "Malloc disk", 00:21:26.999 "block_size": 512, 00:21:26.999 "num_blocks": 65536, 00:21:26.999 "uuid": "37a97d5d-0869-434c-8b7d-05067d35abbb", 00:21:26.999 "assigned_rate_limits": { 00:21:26.999 "rw_ios_per_sec": 0, 00:21:26.999 "rw_mbytes_per_sec": 0, 00:21:26.999 "r_mbytes_per_sec": 0, 00:21:26.999 "w_mbytes_per_sec": 0 00:21:26.999 }, 00:21:26.999 "claimed": true, 00:21:26.999 "claim_type": "exclusive_write", 00:21:26.999 "zoned": false, 00:21:26.999 "supported_io_types": { 00:21:26.999 "read": true, 00:21:26.999 "write": true, 00:21:26.999 "unmap": true, 00:21:26.999 "write_zeroes": true, 00:21:26.999 "flush": true, 00:21:26.999 "reset": true, 00:21:26.999 "compare": false, 00:21:26.999 "compare_and_write": false, 00:21:26.999 "abort": true, 00:21:26.999 "nvme_admin": false, 00:21:26.999 "nvme_io": false 00:21:26.999 }, 00:21:26.999 "memory_domains": [ 00:21:26.999 { 00:21:26.999 "dma_device_id": "system", 00:21:26.999 "dma_device_type": 1 00:21:26.999 }, 00:21:26.999 { 00:21:26.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.999 "dma_device_type": 2 00:21:26.999 } 00:21:26.999 ], 00:21:26.999 "driver_specific": {} 00:21:26.999 }' 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.999 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:27.258 19:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:27.518 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:27.518 "name": "BaseBdev2", 00:21:27.518 "aliases": [ 00:21:27.518 "3272fe12-3289-4990-a5f4-adc54c66ca5c" 00:21:27.518 ], 00:21:27.518 "product_name": "Malloc disk", 00:21:27.518 "block_size": 512, 00:21:27.518 "num_blocks": 65536, 00:21:27.518 "uuid": "3272fe12-3289-4990-a5f4-adc54c66ca5c", 00:21:27.518 "assigned_rate_limits": { 00:21:27.518 "rw_ios_per_sec": 0, 00:21:27.518 "rw_mbytes_per_sec": 0, 00:21:27.518 "r_mbytes_per_sec": 0, 00:21:27.518 "w_mbytes_per_sec": 0 00:21:27.518 }, 00:21:27.518 "claimed": true, 00:21:27.518 "claim_type": "exclusive_write", 00:21:27.518 "zoned": false, 00:21:27.518 "supported_io_types": { 00:21:27.518 "read": true, 00:21:27.518 "write": true, 00:21:27.518 "unmap": true, 00:21:27.518 "write_zeroes": true, 00:21:27.518 "flush": true, 00:21:27.518 "reset": true, 00:21:27.518 "compare": false, 00:21:27.518 "compare_and_write": false, 00:21:27.518 "abort": true, 
00:21:27.518 "nvme_admin": false, 00:21:27.518 "nvme_io": false 00:21:27.518 }, 00:21:27.518 "memory_domains": [ 00:21:27.518 { 00:21:27.518 "dma_device_id": "system", 00:21:27.518 "dma_device_type": 1 00:21:27.518 }, 00:21:27.518 { 00:21:27.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.518 "dma_device_type": 2 00:21:27.518 } 00:21:27.518 ], 00:21:27.518 "driver_specific": {} 00:21:27.518 }' 00:21:27.518 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.518 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.518 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:27.518 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.518 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:21:27.777 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:28.037 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:28.037 "name": "BaseBdev3", 00:21:28.037 "aliases": [ 00:21:28.037 "978e2b7b-f2a7-4316-b140-3b622a1bb8b3" 00:21:28.037 ], 00:21:28.037 "product_name": "Malloc disk", 00:21:28.037 "block_size": 512, 00:21:28.037 "num_blocks": 65536, 00:21:28.037 "uuid": "978e2b7b-f2a7-4316-b140-3b622a1bb8b3", 00:21:28.037 "assigned_rate_limits": { 00:21:28.037 "rw_ios_per_sec": 0, 00:21:28.037 "rw_mbytes_per_sec": 0, 00:21:28.037 "r_mbytes_per_sec": 0, 00:21:28.037 "w_mbytes_per_sec": 0 00:21:28.037 }, 00:21:28.037 "claimed": true, 00:21:28.037 "claim_type": "exclusive_write", 00:21:28.037 "zoned": false, 00:21:28.037 "supported_io_types": { 00:21:28.037 "read": true, 00:21:28.037 "write": true, 00:21:28.037 "unmap": true, 00:21:28.037 "write_zeroes": true, 00:21:28.037 "flush": true, 00:21:28.037 "reset": true, 00:21:28.037 "compare": false, 00:21:28.037 "compare_and_write": false, 00:21:28.037 "abort": true, 00:21:28.037 "nvme_admin": false, 00:21:28.037 "nvme_io": false 00:21:28.037 }, 00:21:28.037 "memory_domains": [ 00:21:28.037 { 00:21:28.037 "dma_device_id": "system", 00:21:28.037 "dma_device_type": 1 00:21:28.037 }, 00:21:28.037 { 00:21:28.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.037 "dma_device_type": 2 00:21:28.037 } 00:21:28.037 ], 00:21:28.037 "driver_specific": {} 00:21:28.037 }' 00:21:28.037 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.037 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.037 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:28.037 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.296 19:05:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.296 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:28.296 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.296 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.296 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:28.296 19:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.296 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.296 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:28.296 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:28.554 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:28.554 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:28.554 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:28.554 "name": "BaseBdev4", 00:21:28.554 "aliases": [ 00:21:28.554 "c3a4df03-7ee4-409d-8760-3c60359f9dce" 00:21:28.554 ], 00:21:28.554 "product_name": "Malloc disk", 00:21:28.554 "block_size": 512, 00:21:28.554 "num_blocks": 65536, 00:21:28.554 "uuid": "c3a4df03-7ee4-409d-8760-3c60359f9dce", 00:21:28.554 "assigned_rate_limits": { 00:21:28.554 "rw_ios_per_sec": 0, 00:21:28.554 "rw_mbytes_per_sec": 0, 00:21:28.554 "r_mbytes_per_sec": 0, 00:21:28.554 "w_mbytes_per_sec": 0 00:21:28.554 }, 00:21:28.554 "claimed": true, 00:21:28.554 "claim_type": "exclusive_write", 00:21:28.554 "zoned": false, 00:21:28.554 "supported_io_types": 
{ 00:21:28.554 "read": true, 00:21:28.554 "write": true, 00:21:28.554 "unmap": true, 00:21:28.554 "write_zeroes": true, 00:21:28.554 "flush": true, 00:21:28.554 "reset": true, 00:21:28.554 "compare": false, 00:21:28.554 "compare_and_write": false, 00:21:28.554 "abort": true, 00:21:28.554 "nvme_admin": false, 00:21:28.554 "nvme_io": false 00:21:28.554 }, 00:21:28.554 "memory_domains": [ 00:21:28.554 { 00:21:28.554 "dma_device_id": "system", 00:21:28.554 "dma_device_type": 1 00:21:28.554 }, 00:21:28.554 { 00:21:28.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.554 "dma_device_type": 2 00:21:28.554 } 00:21:28.554 ], 00:21:28.554 "driver_specific": {} 00:21:28.554 }' 00:21:28.554 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.812 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.071 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:29.071 19:05:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:29.071 [2024-06-10 19:05:43.806029] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.071 [2024-06-10 19:05:43.806052] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.071 [2024-06-10 19:05:43.806107] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.071 [2024-06-10 19:05:43.806169] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.071 [2024-06-10 19:05:43.806180] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x241b000 name Existed_Raid, state offline 00:21:29.071 19:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1714291 00:21:29.071 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1714291 ']' 00:21:29.071 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1714291 00:21:29.071 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:21:29.330 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:29.330 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1714291 00:21:29.330 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:29.330 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:29.330 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1714291' 00:21:29.330 killing process with pid 1714291 00:21:29.330 
19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1714291 00:21:29.330 [2024-06-10 19:05:43.879387] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.330 19:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1714291 00:21:29.330 [2024-06-10 19:05:43.930288] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.589 19:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:29.589 00:21:29.589 real 0m30.655s 00:21:29.589 user 0m56.022s 00:21:29.589 sys 0m5.586s 00:21:29.589 19:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:29.589 19:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.589 ************************************ 00:21:29.589 END TEST raid_state_function_test_sb 00:21:29.589 ************************************ 00:21:29.589 19:05:44 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:29.589 19:05:44 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:21:29.589 19:05:44 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:29.589 19:05:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.589 ************************************ 00:21:29.589 START TEST raid_superblock_test 00:21:29.589 ************************************ 00:21:29.589 19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 4 00:21:29.589 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:21:29.589 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1720179 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1720179 /var/tmp/spdk-raid.sock 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 1720179 ']' 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:29.590 
19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:29.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:29.590 19:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.850 [2024-06-10 19:05:44.366733] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:21:29.850 [2024-06-10 19:05:44.366788] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720179 ] 00:21:29.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.850 EAL: Requested device 0000:b6:01.0 cannot be used 00:21:29.850
[2024-06-10 19:05:44.500106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.850 [2024-06-10 19:05:44.584091] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.110 [2024-06-10 19:05:44.644219] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.110 [2024-06-10 19:05:44.644255] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.678 19:05:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:30.678 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:30.936 malloc1 00:21:30.936 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:31.195 [2024-06-10 19:05:45.714456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:31.195 [2024-06-10 19:05:45.714503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.195 [2024-06-10 19:05:45.714520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f2b70 00:21:31.195 [2024-06-10 19:05:45.714532] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.195 [2024-06-10 19:05:45.715993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.195 [2024-06-10 19:05:45.716021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:31.195 pt1 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.195 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:31.454 malloc2 00:21:31.454 19:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:31.454 [2024-06-10 19:05:46.184079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:31.454 [2024-06-10 19:05:46.184121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.454 [2024-06-10 19:05:46.184136] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f3f70 00:21:31.454 [2024-06-10 19:05:46.184148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.454 [2024-06-10 19:05:46.185472] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.454 [2024-06-10 19:05:46.185498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:31.454 pt2 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.454 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:31.714 malloc3 00:21:31.714 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:31.973 [2024-06-10 19:05:46.645455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:31.973 [2024-06-10 19:05:46.645491] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.973 [2024-06-10 19:05:46.645506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x158a940 00:21:31.973 [2024-06-10 19:05:46.645517] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.973 [2024-06-10 19:05:46.646768] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.973 [2024-06-10 19:05:46.646794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:31.973 pt3 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.973 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:32.232 malloc4 00:21:32.232 19:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:32.492 [2024-06-10 19:05:47.062588] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:32.492 [2024-06-10 19:05:47.062625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.492 [2024-06-10 19:05:47.062641] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13ea900 00:21:32.492 [2024-06-10 19:05:47.062652] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.492 [2024-06-10 19:05:47.063882] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.492 [2024-06-10 19:05:47.063906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:32.492 pt4 00:21:32.492 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:32.492 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:32.492 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:32.752 [2024-06-10 19:05:47.291204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:32.752 [2024-06-10 19:05:47.292261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:32.752 [2024-06-10 19:05:47.292308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:32.752 [2024-06-10 19:05:47.292347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:32.752 [2024-06-10 19:05:47.292496] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13ec800 00:21:32.752 [2024-06-10 19:05:47.292506] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:32.752 [2024-06-10 19:05:47.292674] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13eab90 00:21:32.752 [2024-06-10 
19:05:47.292800] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13ec800 00:21:32.752 [2024-06-10 19:05:47.292810] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13ec800 00:21:32.752 [2024-06-10 19:05:47.292890] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.752 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.011 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.011 "name": "raid_bdev1", 00:21:33.011 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 
00:21:33.011 "strip_size_kb": 64, 00:21:33.011 "state": "online", 00:21:33.011 "raid_level": "concat", 00:21:33.011 "superblock": true, 00:21:33.011 "num_base_bdevs": 4, 00:21:33.011 "num_base_bdevs_discovered": 4, 00:21:33.011 "num_base_bdevs_operational": 4, 00:21:33.011 "base_bdevs_list": [ 00:21:33.011 { 00:21:33.011 "name": "pt1", 00:21:33.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.011 "is_configured": true, 00:21:33.011 "data_offset": 2048, 00:21:33.011 "data_size": 63488 00:21:33.011 }, 00:21:33.011 { 00:21:33.011 "name": "pt2", 00:21:33.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.011 "is_configured": true, 00:21:33.011 "data_offset": 2048, 00:21:33.011 "data_size": 63488 00:21:33.011 }, 00:21:33.011 { 00:21:33.011 "name": "pt3", 00:21:33.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.011 "is_configured": true, 00:21:33.011 "data_offset": 2048, 00:21:33.011 "data_size": 63488 00:21:33.011 }, 00:21:33.011 { 00:21:33.011 "name": "pt4", 00:21:33.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:33.011 "is_configured": true, 00:21:33.011 "data_offset": 2048, 00:21:33.011 "data_size": 63488 00:21:33.011 } 00:21:33.011 ] 00:21:33.011 }' 00:21:33.011 19:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.011 19:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:33.580 19:05:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:33.580 [2024-06-10 19:05:48.314126] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.580 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:33.580 "name": "raid_bdev1", 00:21:33.580 "aliases": [ 00:21:33.580 "439920da-155c-4452-aa42-75d5fee9872c" 00:21:33.580 ], 00:21:33.580 "product_name": "Raid Volume", 00:21:33.580 "block_size": 512, 00:21:33.580 "num_blocks": 253952, 00:21:33.580 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:33.580 "assigned_rate_limits": { 00:21:33.580 "rw_ios_per_sec": 0, 00:21:33.580 "rw_mbytes_per_sec": 0, 00:21:33.580 "r_mbytes_per_sec": 0, 00:21:33.580 "w_mbytes_per_sec": 0 00:21:33.580 }, 00:21:33.580 "claimed": false, 00:21:33.580 "zoned": false, 00:21:33.580 "supported_io_types": { 00:21:33.580 "read": true, 00:21:33.580 "write": true, 00:21:33.580 "unmap": true, 00:21:33.580 "write_zeroes": true, 00:21:33.580 "flush": true, 00:21:33.580 "reset": true, 00:21:33.580 "compare": false, 00:21:33.580 "compare_and_write": false, 00:21:33.580 "abort": false, 00:21:33.580 "nvme_admin": false, 00:21:33.580 "nvme_io": false 00:21:33.580 }, 00:21:33.580 "memory_domains": [ 00:21:33.580 { 00:21:33.580 "dma_device_id": "system", 00:21:33.580 "dma_device_type": 1 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.580 "dma_device_type": 2 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "system", 00:21:33.580 "dma_device_type": 1 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.580 
"dma_device_type": 2 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "system", 00:21:33.580 "dma_device_type": 1 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.580 "dma_device_type": 2 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "system", 00:21:33.580 "dma_device_type": 1 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.580 "dma_device_type": 2 00:21:33.580 } 00:21:33.580 ], 00:21:33.580 "driver_specific": { 00:21:33.580 "raid": { 00:21:33.580 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:33.580 "strip_size_kb": 64, 00:21:33.580 "state": "online", 00:21:33.580 "raid_level": "concat", 00:21:33.580 "superblock": true, 00:21:33.580 "num_base_bdevs": 4, 00:21:33.580 "num_base_bdevs_discovered": 4, 00:21:33.580 "num_base_bdevs_operational": 4, 00:21:33.580 "base_bdevs_list": [ 00:21:33.580 { 00:21:33.580 "name": "pt1", 00:21:33.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.580 "is_configured": true, 00:21:33.580 "data_offset": 2048, 00:21:33.580 "data_size": 63488 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "name": "pt2", 00:21:33.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.580 "is_configured": true, 00:21:33.580 "data_offset": 2048, 00:21:33.580 "data_size": 63488 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "name": "pt3", 00:21:33.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.580 "is_configured": true, 00:21:33.580 "data_offset": 2048, 00:21:33.580 "data_size": 63488 00:21:33.580 }, 00:21:33.580 { 00:21:33.580 "name": "pt4", 00:21:33.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:33.580 "is_configured": true, 00:21:33.580 "data_offset": 2048, 00:21:33.580 "data_size": 63488 00:21:33.580 } 00:21:33.580 ] 00:21:33.580 } 00:21:33.580 } 00:21:33.580 }' 00:21:33.839 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:21:33.839 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:33.839 pt2 00:21:33.839 pt3 00:21:33.839 pt4' 00:21:33.839 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.839 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:33.839 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.098 "name": "pt1", 00:21:34.098 "aliases": [ 00:21:34.098 "00000000-0000-0000-0000-000000000001" 00:21:34.098 ], 00:21:34.098 "product_name": "passthru", 00:21:34.098 "block_size": 512, 00:21:34.098 "num_blocks": 65536, 00:21:34.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.098 "assigned_rate_limits": { 00:21:34.098 "rw_ios_per_sec": 0, 00:21:34.098 "rw_mbytes_per_sec": 0, 00:21:34.098 "r_mbytes_per_sec": 0, 00:21:34.098 "w_mbytes_per_sec": 0 00:21:34.098 }, 00:21:34.098 "claimed": true, 00:21:34.098 "claim_type": "exclusive_write", 00:21:34.098 "zoned": false, 00:21:34.098 "supported_io_types": { 00:21:34.098 "read": true, 00:21:34.098 "write": true, 00:21:34.098 "unmap": true, 00:21:34.098 "write_zeroes": true, 00:21:34.098 "flush": true, 00:21:34.098 "reset": true, 00:21:34.098 "compare": false, 00:21:34.098 "compare_and_write": false, 00:21:34.098 "abort": true, 00:21:34.098 "nvme_admin": false, 00:21:34.098 "nvme_io": false 00:21:34.098 }, 00:21:34.098 "memory_domains": [ 00:21:34.098 { 00:21:34.098 "dma_device_id": "system", 00:21:34.098 "dma_device_type": 1 00:21:34.098 }, 00:21:34.098 { 00:21:34.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.098 "dma_device_type": 2 00:21:34.098 } 00:21:34.098 ], 00:21:34.098 "driver_specific": { 00:21:34.098 
"passthru": { 00:21:34.098 "name": "pt1", 00:21:34.098 "base_bdev_name": "malloc1" 00:21:34.098 } 00:21:34.098 } 00:21:34.098 }' 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.098 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.357 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.357 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.357 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.357 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:34.357 19:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.616 "name": "pt2", 00:21:34.616 "aliases": [ 00:21:34.616 "00000000-0000-0000-0000-000000000002" 00:21:34.616 ], 00:21:34.616 "product_name": "passthru", 00:21:34.616 
"block_size": 512, 00:21:34.616 "num_blocks": 65536, 00:21:34.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.616 "assigned_rate_limits": { 00:21:34.616 "rw_ios_per_sec": 0, 00:21:34.616 "rw_mbytes_per_sec": 0, 00:21:34.616 "r_mbytes_per_sec": 0, 00:21:34.616 "w_mbytes_per_sec": 0 00:21:34.616 }, 00:21:34.616 "claimed": true, 00:21:34.616 "claim_type": "exclusive_write", 00:21:34.616 "zoned": false, 00:21:34.616 "supported_io_types": { 00:21:34.616 "read": true, 00:21:34.616 "write": true, 00:21:34.616 "unmap": true, 00:21:34.616 "write_zeroes": true, 00:21:34.616 "flush": true, 00:21:34.616 "reset": true, 00:21:34.616 "compare": false, 00:21:34.616 "compare_and_write": false, 00:21:34.616 "abort": true, 00:21:34.616 "nvme_admin": false, 00:21:34.616 "nvme_io": false 00:21:34.616 }, 00:21:34.616 "memory_domains": [ 00:21:34.616 { 00:21:34.616 "dma_device_id": "system", 00:21:34.616 "dma_device_type": 1 00:21:34.616 }, 00:21:34.616 { 00:21:34.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.616 "dma_device_type": 2 00:21:34.616 } 00:21:34.616 ], 00:21:34.616 "driver_specific": { 00:21:34.616 "passthru": { 00:21:34.616 "name": "pt2", 00:21:34.616 "base_bdev_name": "malloc2" 00:21:34.616 } 00:21:34.616 } 00:21:34.616 }' 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.616 19:05:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:34.876 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:35.135 "name": "pt3", 00:21:35.135 "aliases": [ 00:21:35.135 "00000000-0000-0000-0000-000000000003" 00:21:35.135 ], 00:21:35.135 "product_name": "passthru", 00:21:35.135 "block_size": 512, 00:21:35.135 "num_blocks": 65536, 00:21:35.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:35.135 "assigned_rate_limits": { 00:21:35.135 "rw_ios_per_sec": 0, 00:21:35.135 "rw_mbytes_per_sec": 0, 00:21:35.135 "r_mbytes_per_sec": 0, 00:21:35.135 "w_mbytes_per_sec": 0 00:21:35.135 }, 00:21:35.135 "claimed": true, 00:21:35.135 "claim_type": "exclusive_write", 00:21:35.135 "zoned": false, 00:21:35.135 "supported_io_types": { 00:21:35.135 "read": true, 00:21:35.135 "write": true, 00:21:35.135 "unmap": true, 00:21:35.135 "write_zeroes": true, 00:21:35.135 "flush": true, 00:21:35.135 "reset": true, 00:21:35.135 "compare": false, 00:21:35.135 "compare_and_write": false, 00:21:35.135 "abort": true, 00:21:35.135 "nvme_admin": false, 00:21:35.135 "nvme_io": false 00:21:35.135 }, 00:21:35.135 "memory_domains": [ 00:21:35.135 { 00:21:35.135 
"dma_device_id": "system", 00:21:35.135 "dma_device_type": 1 00:21:35.135 }, 00:21:35.135 { 00:21:35.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.135 "dma_device_type": 2 00:21:35.135 } 00:21:35.135 ], 00:21:35.135 "driver_specific": { 00:21:35.135 "passthru": { 00:21:35.135 "name": "pt3", 00:21:35.135 "base_bdev_name": "malloc3" 00:21:35.135 } 00:21:35.135 } 00:21:35.135 }' 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:35.135 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.395 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.395 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:35.395 19:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.395 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.395 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.395 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:35.395 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:35.395 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:35.654 19:05:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:35.654 "name": "pt4", 00:21:35.654 "aliases": [ 00:21:35.654 "00000000-0000-0000-0000-000000000004" 00:21:35.654 ], 00:21:35.654 "product_name": "passthru", 00:21:35.654 "block_size": 512, 00:21:35.654 "num_blocks": 65536, 00:21:35.654 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:35.654 "assigned_rate_limits": { 00:21:35.654 "rw_ios_per_sec": 0, 00:21:35.654 "rw_mbytes_per_sec": 0, 00:21:35.654 "r_mbytes_per_sec": 0, 00:21:35.654 "w_mbytes_per_sec": 0 00:21:35.654 }, 00:21:35.654 "claimed": true, 00:21:35.654 "claim_type": "exclusive_write", 00:21:35.654 "zoned": false, 00:21:35.654 "supported_io_types": { 00:21:35.654 "read": true, 00:21:35.654 "write": true, 00:21:35.654 "unmap": true, 00:21:35.654 "write_zeroes": true, 00:21:35.654 "flush": true, 00:21:35.654 "reset": true, 00:21:35.654 "compare": false, 00:21:35.654 "compare_and_write": false, 00:21:35.654 "abort": true, 00:21:35.654 "nvme_admin": false, 00:21:35.654 "nvme_io": false 00:21:35.654 }, 00:21:35.654 "memory_domains": [ 00:21:35.654 { 00:21:35.654 "dma_device_id": "system", 00:21:35.654 "dma_device_type": 1 00:21:35.654 }, 00:21:35.654 { 00:21:35.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.654 "dma_device_type": 2 00:21:35.654 } 00:21:35.654 ], 00:21:35.654 "driver_specific": { 00:21:35.654 "passthru": { 00:21:35.654 "name": "pt4", 00:21:35.655 "base_bdev_name": "malloc4" 00:21:35.655 } 00:21:35.655 } 00:21:35.655 }' 00:21:35.655 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:35.655 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:35.655 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:35.655 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:35.913 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:36.172 [2024-06-10 19:05:50.840917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.172 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=439920da-155c-4452-aa42-75d5fee9872c 00:21:36.172 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 439920da-155c-4452-aa42-75d5fee9872c ']' 00:21:36.172 19:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:36.432 [2024-06-10 19:05:51.069267] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.432 [2024-06-10 19:05:51.069289] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.432 [2024-06-10 19:05:51.069335] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.432 [2024-06-10 19:05:51.069393] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.432 [2024-06-10 19:05:51.069405] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13ec800 name raid_bdev1, state offline 00:21:36.432 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.432 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:36.691 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:36.691 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:36.691 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.691 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:36.950 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.950 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:37.210 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:37.210 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:37.469 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:37.469 19:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 
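After deleting the raid bdev and each passthru bdev, the script's next step (`@450`) confirms no passthru bdevs remain by folding the bdev list through a `jq` `any` filter. A self-contained sketch of that filter against hypothetical post-cleanup JSON:

```shell
# Hypothetical bdev_get_bdevs output after the pt1..pt4 passthru bdevs have
# been deleted; only the malloc base bdevs remain.
bdevs='[{"name":"malloc1","product_name":"Malloc disk"},{"name":"malloc2","product_name":"Malloc disk"}]'

# Same filter as @450: collect any surviving passthru bdevs into an array,
# then reduce with `any` -- false means the cleanup succeeded.
has_passthru=$(echo "$bdevs" | jq -r '[.[] | select(.product_name == "passthru")] | any')
echo "$has_passthru"   # -> false
```

With an empty `select` result the array is `[]`, and `any` over an empty array is `false`, which is why the log's `'[' false == true ']'` branch is not taken.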
00:21:37.469 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:37.469 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:37.728 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:37.987 [2024-06-10 19:05:52.657387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:37.987 [2024-06-10 19:05:52.658656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:37.987 [2024-06-10 19:05:52.658697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:37.987 [2024-06-10 19:05:52.658729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:37.987 [2024-06-10 19:05:52.658770] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:37.987 [2024-06-10 19:05:52.658812] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:37.987 [2024-06-10 19:05:52.658833] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:37.987 [2024-06-10 19:05:52.658853] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:37.987 [2024-06-10 19:05:52.658870] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.987 [2024-06-10 19:05:52.658879] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13f3010 name raid_bdev1, state configuring 00:21:37.987 request: 00:21:37.987 
{ 00:21:37.987 "name": "raid_bdev1", 00:21:37.987 "raid_level": "concat", 00:21:37.987 "base_bdevs": [ 00:21:37.987 "malloc1", 00:21:37.987 "malloc2", 00:21:37.987 "malloc3", 00:21:37.987 "malloc4" 00:21:37.987 ], 00:21:37.987 "superblock": false, 00:21:37.987 "strip_size_kb": 64, 00:21:37.988 "method": "bdev_raid_create", 00:21:37.988 "req_id": 1 00:21:37.988 } 00:21:37.988 Got JSON-RPC error response 00:21:37.988 response: 00:21:37.988 { 00:21:37.988 "code": -17, 00:21:37.988 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:37.988 } 00:21:37.988 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:21:37.988 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:37.988 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:37.988 19:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:37.988 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.988 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:38.247 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:38.247 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:38.247 19:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:38.507 [2024-06-10 19:05:53.102504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:38.507 [2024-06-10 19:05:53.102549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.507 [2024-06-10 19:05:53.102568] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f2da0 00:21:38.507 [2024-06-10 19:05:53.102590] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.507 [2024-06-10 19:05:53.104124] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.507 [2024-06-10 19:05:53.104155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:38.507 [2024-06-10 19:05:53.104219] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:38.507 [2024-06-10 19:05:53.104246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:38.507 pt1 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:21:38.507 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.766 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.766 "name": "raid_bdev1", 00:21:38.766 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:38.766 "strip_size_kb": 64, 00:21:38.766 "state": "configuring", 00:21:38.766 "raid_level": "concat", 00:21:38.766 "superblock": true, 00:21:38.766 "num_base_bdevs": 4, 00:21:38.766 "num_base_bdevs_discovered": 1, 00:21:38.766 "num_base_bdevs_operational": 4, 00:21:38.766 "base_bdevs_list": [ 00:21:38.766 { 00:21:38.766 "name": "pt1", 00:21:38.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.766 "is_configured": true, 00:21:38.766 "data_offset": 2048, 00:21:38.766 "data_size": 63488 00:21:38.766 }, 00:21:38.766 { 00:21:38.766 "name": null, 00:21:38.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.766 "is_configured": false, 00:21:38.766 "data_offset": 2048, 00:21:38.766 "data_size": 63488 00:21:38.766 }, 00:21:38.766 { 00:21:38.766 "name": null, 00:21:38.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.766 "is_configured": false, 00:21:38.766 "data_offset": 2048, 00:21:38.766 "data_size": 63488 00:21:38.766 }, 00:21:38.766 { 00:21:38.766 "name": null, 00:21:38.766 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:38.766 "is_configured": false, 00:21:38.766 "data_offset": 2048, 00:21:38.766 "data_size": 63488 00:21:38.766 } 00:21:38.766 ] 00:21:38.766 }' 00:21:38.766 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.766 19:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.334 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:21:39.334 19:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:39.593 [2024-06-10 19:05:54.121202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:39.593 [2024-06-10 19:05:54.121255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.593 [2024-06-10 19:05:54.121273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x158c080 00:21:39.593 [2024-06-10 19:05:54.121285] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.593 [2024-06-10 19:05:54.121638] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.593 [2024-06-10 19:05:54.121656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:39.594 [2024-06-10 19:05:54.121718] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:39.594 [2024-06-10 19:05:54.121737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:39.594 pt2 00:21:39.594 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:39.594 [2024-06-10 19:05:54.345810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:39.853 19:05:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.853 "name": "raid_bdev1", 00:21:39.853 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:39.853 "strip_size_kb": 64, 00:21:39.853 "state": "configuring", 00:21:39.853 "raid_level": "concat", 00:21:39.853 "superblock": true, 00:21:39.853 "num_base_bdevs": 4, 00:21:39.853 "num_base_bdevs_discovered": 1, 00:21:39.853 "num_base_bdevs_operational": 4, 00:21:39.853 "base_bdevs_list": [ 00:21:39.853 { 00:21:39.853 "name": "pt1", 00:21:39.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.853 "is_configured": true, 00:21:39.853 "data_offset": 2048, 00:21:39.853 "data_size": 63488 00:21:39.853 }, 00:21:39.853 { 00:21:39.853 "name": null, 00:21:39.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.853 "is_configured": false, 00:21:39.853 "data_offset": 2048, 00:21:39.853 "data_size": 63488 00:21:39.853 }, 00:21:39.853 { 00:21:39.853 "name": null, 00:21:39.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.853 "is_configured": false, 00:21:39.853 "data_offset": 2048, 00:21:39.853 "data_size": 63488 00:21:39.853 }, 
00:21:39.853 { 00:21:39.853 "name": null, 00:21:39.853 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:39.853 "is_configured": false, 00:21:39.853 "data_offset": 2048, 00:21:39.853 "data_size": 63488 00:21:39.853 } 00:21:39.853 ] 00:21:39.853 }' 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.853 19:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.422 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:40.422 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:40.422 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:40.767 [2024-06-10 19:05:55.328631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.767 [2024-06-10 19:05:55.328684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.767 [2024-06-10 19:05:55.328701] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13e96e0 00:21:40.767 [2024-06-10 19:05:55.328713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.767 [2024-06-10 19:05:55.329032] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.767 [2024-06-10 19:05:55.329047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.767 [2024-06-10 19:05:55.329105] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:40.767 [2024-06-10 19:05:55.329123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.767 pt2 00:21:40.767 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:40.767 19:05:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:40.767 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:41.027 [2024-06-10 19:05:55.557229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:41.027 [2024-06-10 19:05:55.557268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.027 [2024-06-10 19:05:55.557285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13ea630 00:21:41.027 [2024-06-10 19:05:55.557296] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.027 [2024-06-10 19:05:55.557589] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.027 [2024-06-10 19:05:55.557606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:41.027 [2024-06-10 19:05:55.557659] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:41.027 [2024-06-10 19:05:55.557676] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:41.027 pt3 00:21:41.027 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:41.027 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:41.027 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:41.287 [2024-06-10 19:05:55.785820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:41.287 [2024-06-10 19:05:55.785845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:21:41.287 [2024-06-10 19:05:55.785859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13ed300 00:21:41.287 [2024-06-10 19:05:55.785870] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.287 [2024-06-10 19:05:55.786107] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.287 [2024-06-10 19:05:55.786123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:41.287 [2024-06-10 19:05:55.786166] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:41.287 [2024-06-10 19:05:55.786182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:41.287 [2024-06-10 19:05:55.786284] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13ec490 00:21:41.287 [2024-06-10 19:05:55.786293] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:41.287 [2024-06-10 19:05:55.786447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x158bda0 00:21:41.287 [2024-06-10 19:05:55.786560] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13ec490 00:21:41.287 [2024-06-10 19:05:55.786569] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13ec490 00:21:41.287 [2024-06-10 19:05:55.786663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.287 pt4 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:41.287 19:05:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.287 19:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.547 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.547 "name": "raid_bdev1", 00:21:41.547 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:41.547 "strip_size_kb": 64, 00:21:41.547 "state": "online", 00:21:41.547 "raid_level": "concat", 00:21:41.547 "superblock": true, 00:21:41.547 "num_base_bdevs": 4, 00:21:41.547 "num_base_bdevs_discovered": 4, 00:21:41.547 "num_base_bdevs_operational": 4, 00:21:41.547 "base_bdevs_list": [ 00:21:41.547 { 00:21:41.547 "name": "pt1", 00:21:41.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.547 "is_configured": true, 00:21:41.547 "data_offset": 2048, 00:21:41.547 "data_size": 63488 00:21:41.547 }, 00:21:41.547 { 00:21:41.547 "name": "pt2", 00:21:41.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.547 
"is_configured": true, 00:21:41.547 "data_offset": 2048, 00:21:41.547 "data_size": 63488 00:21:41.547 }, 00:21:41.547 { 00:21:41.547 "name": "pt3", 00:21:41.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.547 "is_configured": true, 00:21:41.547 "data_offset": 2048, 00:21:41.547 "data_size": 63488 00:21:41.547 }, 00:21:41.547 { 00:21:41.547 "name": "pt4", 00:21:41.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:41.547 "is_configured": true, 00:21:41.547 "data_offset": 2048, 00:21:41.547 "data_size": 63488 00:21:41.547 } 00:21:41.547 ] 00:21:41.547 }' 00:21:41.547 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.547 19:05:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:42.116 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:42.117 [2024-06-10 19:05:56.844880] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.117 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:42.117 "name": "raid_bdev1", 00:21:42.117 "aliases": [ 
00:21:42.117 "439920da-155c-4452-aa42-75d5fee9872c" 00:21:42.117 ], 00:21:42.117 "product_name": "Raid Volume", 00:21:42.117 "block_size": 512, 00:21:42.117 "num_blocks": 253952, 00:21:42.117 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:42.117 "assigned_rate_limits": { 00:21:42.117 "rw_ios_per_sec": 0, 00:21:42.117 "rw_mbytes_per_sec": 0, 00:21:42.117 "r_mbytes_per_sec": 0, 00:21:42.117 "w_mbytes_per_sec": 0 00:21:42.117 }, 00:21:42.117 "claimed": false, 00:21:42.117 "zoned": false, 00:21:42.117 "supported_io_types": { 00:21:42.117 "read": true, 00:21:42.117 "write": true, 00:21:42.117 "unmap": true, 00:21:42.117 "write_zeroes": true, 00:21:42.117 "flush": true, 00:21:42.117 "reset": true, 00:21:42.117 "compare": false, 00:21:42.117 "compare_and_write": false, 00:21:42.117 "abort": false, 00:21:42.117 "nvme_admin": false, 00:21:42.117 "nvme_io": false 00:21:42.117 }, 00:21:42.117 "memory_domains": [ 00:21:42.117 { 00:21:42.117 "dma_device_id": "system", 00:21:42.117 "dma_device_type": 1 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.117 "dma_device_type": 2 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "system", 00:21:42.117 "dma_device_type": 1 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.117 "dma_device_type": 2 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "system", 00:21:42.117 "dma_device_type": 1 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.117 "dma_device_type": 2 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "system", 00:21:42.117 "dma_device_type": 1 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.117 "dma_device_type": 2 00:21:42.117 } 00:21:42.117 ], 00:21:42.117 "driver_specific": { 00:21:42.117 "raid": { 00:21:42.117 "uuid": "439920da-155c-4452-aa42-75d5fee9872c", 00:21:42.117 "strip_size_kb": 64, 
00:21:42.117 "state": "online", 00:21:42.117 "raid_level": "concat", 00:21:42.117 "superblock": true, 00:21:42.117 "num_base_bdevs": 4, 00:21:42.117 "num_base_bdevs_discovered": 4, 00:21:42.117 "num_base_bdevs_operational": 4, 00:21:42.117 "base_bdevs_list": [ 00:21:42.117 { 00:21:42.117 "name": "pt1", 00:21:42.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:42.117 "is_configured": true, 00:21:42.117 "data_offset": 2048, 00:21:42.117 "data_size": 63488 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "name": "pt2", 00:21:42.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.117 "is_configured": true, 00:21:42.117 "data_offset": 2048, 00:21:42.117 "data_size": 63488 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "name": "pt3", 00:21:42.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:42.117 "is_configured": true, 00:21:42.117 "data_offset": 2048, 00:21:42.117 "data_size": 63488 00:21:42.117 }, 00:21:42.117 { 00:21:42.117 "name": "pt4", 00:21:42.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:42.117 "is_configured": true, 00:21:42.117 "data_offset": 2048, 00:21:42.117 "data_size": 63488 00:21:42.117 } 00:21:42.117 ] 00:21:42.117 } 00:21:42.117 } 00:21:42.117 }' 00:21:42.117 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:42.377 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:42.377 pt2 00:21:42.377 pt3 00:21:42.377 pt4' 00:21:42.377 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.377 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:42.377 19:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:42.636 "name": "pt1", 00:21:42.636 "aliases": [ 00:21:42.636 "00000000-0000-0000-0000-000000000001" 00:21:42.636 ], 00:21:42.636 "product_name": "passthru", 00:21:42.636 "block_size": 512, 00:21:42.636 "num_blocks": 65536, 00:21:42.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:42.636 "assigned_rate_limits": { 00:21:42.636 "rw_ios_per_sec": 0, 00:21:42.636 "rw_mbytes_per_sec": 0, 00:21:42.636 "r_mbytes_per_sec": 0, 00:21:42.636 "w_mbytes_per_sec": 0 00:21:42.636 }, 00:21:42.636 "claimed": true, 00:21:42.636 "claim_type": "exclusive_write", 00:21:42.636 "zoned": false, 00:21:42.636 "supported_io_types": { 00:21:42.636 "read": true, 00:21:42.636 "write": true, 00:21:42.636 "unmap": true, 00:21:42.636 "write_zeroes": true, 00:21:42.636 "flush": true, 00:21:42.636 "reset": true, 00:21:42.636 "compare": false, 00:21:42.636 "compare_and_write": false, 00:21:42.636 "abort": true, 00:21:42.636 "nvme_admin": false, 00:21:42.636 "nvme_io": false 00:21:42.636 }, 00:21:42.636 "memory_domains": [ 00:21:42.636 { 00:21:42.636 "dma_device_id": "system", 00:21:42.636 "dma_device_type": 1 00:21:42.636 }, 00:21:42.636 { 00:21:42.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.636 "dma_device_type": 2 00:21:42.636 } 00:21:42.636 ], 00:21:42.636 "driver_specific": { 00:21:42.636 "passthru": { 00:21:42.636 "name": "pt1", 00:21:42.636 "base_bdev_name": "malloc1" 00:21:42.636 } 00:21:42.636 } 00:21:42.636 }' 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.636 19:05:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.636 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.637 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.637 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.896 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.896 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.896 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.896 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:42.896 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.155 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.155 "name": "pt2", 00:21:43.155 "aliases": [ 00:21:43.155 "00000000-0000-0000-0000-000000000002" 00:21:43.155 ], 00:21:43.155 "product_name": "passthru", 00:21:43.155 "block_size": 512, 00:21:43.155 "num_blocks": 65536, 00:21:43.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.155 "assigned_rate_limits": { 00:21:43.155 "rw_ios_per_sec": 0, 00:21:43.155 "rw_mbytes_per_sec": 0, 00:21:43.155 "r_mbytes_per_sec": 0, 00:21:43.155 "w_mbytes_per_sec": 0 00:21:43.155 }, 00:21:43.155 "claimed": true, 00:21:43.155 "claim_type": "exclusive_write", 00:21:43.155 "zoned": false, 00:21:43.155 "supported_io_types": { 00:21:43.155 "read": true, 00:21:43.155 "write": true, 00:21:43.155 "unmap": true, 00:21:43.155 "write_zeroes": true, 00:21:43.155 "flush": true, 00:21:43.155 "reset": true, 00:21:43.155 
"compare": false, 00:21:43.155 "compare_and_write": false, 00:21:43.155 "abort": true, 00:21:43.155 "nvme_admin": false, 00:21:43.155 "nvme_io": false 00:21:43.155 }, 00:21:43.155 "memory_domains": [ 00:21:43.155 { 00:21:43.155 "dma_device_id": "system", 00:21:43.155 "dma_device_type": 1 00:21:43.155 }, 00:21:43.155 { 00:21:43.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.155 "dma_device_type": 2 00:21:43.155 } 00:21:43.155 ], 00:21:43.155 "driver_specific": { 00:21:43.155 "passthru": { 00:21:43.155 "name": "pt2", 00:21:43.155 "base_bdev_name": "malloc2" 00:21:43.155 } 00:21:43.155 } 00:21:43.155 }' 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.156 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.414 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.414 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.414 19:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.414 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.414 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.414 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.414 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:43.414 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.673 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.673 "name": "pt3", 00:21:43.673 "aliases": [ 00:21:43.673 "00000000-0000-0000-0000-000000000003" 00:21:43.673 ], 00:21:43.673 "product_name": "passthru", 00:21:43.673 "block_size": 512, 00:21:43.673 "num_blocks": 65536, 00:21:43.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.673 "assigned_rate_limits": { 00:21:43.673 "rw_ios_per_sec": 0, 00:21:43.673 "rw_mbytes_per_sec": 0, 00:21:43.673 "r_mbytes_per_sec": 0, 00:21:43.673 "w_mbytes_per_sec": 0 00:21:43.673 }, 00:21:43.673 "claimed": true, 00:21:43.673 "claim_type": "exclusive_write", 00:21:43.673 "zoned": false, 00:21:43.673 "supported_io_types": { 00:21:43.673 "read": true, 00:21:43.673 "write": true, 00:21:43.673 "unmap": true, 00:21:43.673 "write_zeroes": true, 00:21:43.673 "flush": true, 00:21:43.673 "reset": true, 00:21:43.673 "compare": false, 00:21:43.673 "compare_and_write": false, 00:21:43.673 "abort": true, 00:21:43.673 "nvme_admin": false, 00:21:43.673 "nvme_io": false 00:21:43.673 }, 00:21:43.673 "memory_domains": [ 00:21:43.673 { 00:21:43.673 "dma_device_id": "system", 00:21:43.673 "dma_device_type": 1 00:21:43.673 }, 00:21:43.673 { 00:21:43.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.673 "dma_device_type": 2 00:21:43.673 } 00:21:43.673 ], 00:21:43.673 "driver_specific": { 00:21:43.673 "passthru": { 00:21:43.673 "name": "pt3", 00:21:43.673 "base_bdev_name": "malloc3" 00:21:43.673 } 00:21:43.673 } 00:21:43.673 }' 00:21:43.673 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.673 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.673 19:05:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.673 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.673 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:43.933 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:44.192 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:44.192 "name": "pt4", 00:21:44.192 "aliases": [ 00:21:44.192 "00000000-0000-0000-0000-000000000004" 00:21:44.192 ], 00:21:44.192 "product_name": "passthru", 00:21:44.192 "block_size": 512, 00:21:44.192 "num_blocks": 65536, 00:21:44.192 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.192 "assigned_rate_limits": { 00:21:44.192 "rw_ios_per_sec": 0, 00:21:44.192 "rw_mbytes_per_sec": 0, 00:21:44.192 "r_mbytes_per_sec": 0, 00:21:44.192 "w_mbytes_per_sec": 0 00:21:44.192 }, 00:21:44.192 "claimed": true, 00:21:44.192 "claim_type": "exclusive_write", 
00:21:44.192 "zoned": false, 00:21:44.192 "supported_io_types": { 00:21:44.192 "read": true, 00:21:44.192 "write": true, 00:21:44.192 "unmap": true, 00:21:44.192 "write_zeroes": true, 00:21:44.192 "flush": true, 00:21:44.192 "reset": true, 00:21:44.192 "compare": false, 00:21:44.192 "compare_and_write": false, 00:21:44.192 "abort": true, 00:21:44.192 "nvme_admin": false, 00:21:44.192 "nvme_io": false 00:21:44.192 }, 00:21:44.192 "memory_domains": [ 00:21:44.192 { 00:21:44.192 "dma_device_id": "system", 00:21:44.192 "dma_device_type": 1 00:21:44.192 }, 00:21:44.192 { 00:21:44.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.192 "dma_device_type": 2 00:21:44.192 } 00:21:44.192 ], 00:21:44.192 "driver_specific": { 00:21:44.192 "passthru": { 00:21:44.192 "name": "pt4", 00:21:44.192 "base_bdev_name": "malloc4" 00:21:44.192 } 00:21:44.192 } 00:21:44.192 }' 00:21:44.192 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.192 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.192 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:44.192 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.451 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.451 19:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.451 19:05:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:44.451 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:44.709 [2024-06-10 19:05:59.387817] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 439920da-155c-4452-aa42-75d5fee9872c '!=' 439920da-155c-4452-aa42-75d5fee9872c ']' 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1720179 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1720179 ']' 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1720179 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:44.709 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1720179 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1720179' 00:21:44.969 
killing process with pid 1720179 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1720179 00:21:44.969 [2024-06-10 19:05:59.467444] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.969 [2024-06-10 19:05:59.467511] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.969 [2024-06-10 19:05:59.467570] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.969 [2024-06-10 19:05:59.467593] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13ec490 name raid_bdev1, state offline 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1720179 00:21:44.969 [2024-06-10 19:05:59.499343] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:44.969 00:21:44.969 real 0m15.380s 00:21:44.969 user 0m27.646s 00:21:44.969 sys 0m2.870s 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:44.969 19:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.969 ************************************ 00:21:44.969 END TEST raid_superblock_test 00:21:44.969 ************************************ 00:21:45.229 19:05:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:21:45.229 19:05:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:21:45.229 19:05:59 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:45.229 19:05:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.229 ************************************ 00:21:45.229 START TEST raid_read_error_test 00:21:45.229 ************************************ 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1124 -- # raid_io_error_test concat 4 read 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ItwthQfkWh 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1723169 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1723169 /var/tmp/spdk-raid.sock 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1723169 ']' 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:45.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:45.229 19:05:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.229 [2024-06-10 19:05:59.851044] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:21:45.230 [2024-06-10 19:05:59.851102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723169 ] 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.0 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.1 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.2 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.3 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.4 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.5 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.6 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:01.7 cannot be used 00:21:45.230 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.0 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.1 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.2 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.3 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.4 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.5 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.6 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b6:02.7 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.0 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.1 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.2 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.3 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.4 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.5 cannot be used 00:21:45.230 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.6 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:01.7 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.0 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.1 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.2 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.3 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.4 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.5 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.6 cannot be used 00:21:45.230 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:45.230 EAL: Requested device 0000:b8:02.7 cannot be used 00:21:45.230 [2024-06-10 19:05:59.972764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.489 [2024-06-10 19:06:00.069642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.489 [2024-06-10 19:06:00.130848] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.489 [2024-06-10 19:06:00.130886] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.057 19:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:46.057 19:06:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@863 -- # return 0 00:21:46.057 19:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:46.057 19:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:46.317 BaseBdev1_malloc 00:21:46.317 19:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:46.575 true 00:21:46.576 19:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:46.834 [2024-06-10 19:06:01.420365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:46.834 [2024-06-10 19:06:01.420406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.834 [2024-06-10 19:06:01.420423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11fad50 00:21:46.834 [2024-06-10 19:06:01.420435] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.834 [2024-06-10 19:06:01.422018] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.834 [2024-06-10 19:06:01.422045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:46.834 BaseBdev1 00:21:46.834 19:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:46.834 19:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:47.093 BaseBdev2_malloc 00:21:47.093 
19:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:47.352 true 00:21:47.352 19:06:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:47.352 [2024-06-10 19:06:02.086429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:47.352 [2024-06-10 19:06:02.086469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.352 [2024-06-10 19:06:02.086487] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12002e0 00:21:47.352 [2024-06-10 19:06:02.086498] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.352 [2024-06-10 19:06:02.087879] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.352 [2024-06-10 19:06:02.087911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:47.352 BaseBdev2 00:21:47.352 19:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:47.352 19:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:47.610 BaseBdev3_malloc 00:21:47.610 19:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:47.868 true 00:21:47.868 19:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
EE_BaseBdev3_malloc -p BaseBdev3 00:21:48.127 [2024-06-10 19:06:02.756414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:48.127 [2024-06-10 19:06:02.756453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.127 [2024-06-10 19:06:02.756471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1201fd0 00:21:48.127 [2024-06-10 19:06:02.756483] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.127 [2024-06-10 19:06:02.757860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.127 [2024-06-10 19:06:02.757887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:48.127 BaseBdev3 00:21:48.127 19:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:48.127 19:06:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:48.385 BaseBdev4_malloc 00:21:48.385 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:48.644 true 00:21:48.644 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:48.903 [2024-06-10 19:06:03.430398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:48.903 [2024-06-10 19:06:03.430436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.903 [2024-06-10 19:06:03.430455] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1203830 
00:21:48.903 [2024-06-10 19:06:03.430466] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.903 [2024-06-10 19:06:03.431840] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.903 [2024-06-10 19:06:03.431866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:48.903 BaseBdev4 00:21:48.903 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:48.903 [2024-06-10 19:06:03.651008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.903 [2024-06-10 19:06:03.652170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.903 [2024-06-10 19:06:03.652233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.903 [2024-06-10 19:06:03.652289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:48.903 [2024-06-10 19:06:03.652500] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12043a0 00:21:48.903 [2024-06-10 19:06:03.652511] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:48.903 [2024-06-10 19:06:03.652693] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1205ac0 00:21:48.903 [2024-06-10 19:06:03.652828] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12043a0 00:21:48.903 [2024-06-10 19:06:03.652841] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12043a0 00:21:48.903 [2024-06-10 19:06:03.652933] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.161 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:49.161 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.162 "name": "raid_bdev1", 00:21:49.162 "uuid": "7eac82bb-c58e-4ee5-adc4-ad203b794785", 00:21:49.162 "strip_size_kb": 64, 00:21:49.162 "state": "online", 00:21:49.162 "raid_level": "concat", 00:21:49.162 "superblock": true, 00:21:49.162 "num_base_bdevs": 4, 00:21:49.162 "num_base_bdevs_discovered": 4, 00:21:49.162 "num_base_bdevs_operational": 4, 00:21:49.162 "base_bdevs_list": [ 00:21:49.162 { 00:21:49.162 "name": "BaseBdev1", 00:21:49.162 "uuid": "5c827857-6e89-54d9-b75e-7b8ab557762e", 00:21:49.162 "is_configured": true, 00:21:49.162 
"data_offset": 2048, 00:21:49.162 "data_size": 63488 00:21:49.162 }, 00:21:49.162 { 00:21:49.162 "name": "BaseBdev2", 00:21:49.162 "uuid": "457d19d3-648d-57f4-a8d0-e284809321ae", 00:21:49.162 "is_configured": true, 00:21:49.162 "data_offset": 2048, 00:21:49.162 "data_size": 63488 00:21:49.162 }, 00:21:49.162 { 00:21:49.162 "name": "BaseBdev3", 00:21:49.162 "uuid": "e7dbc4e4-6ff0-5975-bed0-400a7a64a243", 00:21:49.162 "is_configured": true, 00:21:49.162 "data_offset": 2048, 00:21:49.162 "data_size": 63488 00:21:49.162 }, 00:21:49.162 { 00:21:49.162 "name": "BaseBdev4", 00:21:49.162 "uuid": "3b67fd7d-147b-5bb6-a2f3-a490b688919e", 00:21:49.162 "is_configured": true, 00:21:49.162 "data_offset": 2048, 00:21:49.162 "data_size": 63488 00:21:49.162 } 00:21:49.162 ] 00:21:49.162 }' 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.162 19:06:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.728 19:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:49.728 19:06:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:49.987 [2024-06-10 19:06:04.573650] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1205940 00:21:50.925 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:21:51.184 19:06:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.184 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.444 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.444 "name": "raid_bdev1", 00:21:51.444 "uuid": "7eac82bb-c58e-4ee5-adc4-ad203b794785", 00:21:51.444 "strip_size_kb": 64, 00:21:51.444 "state": "online", 00:21:51.444 "raid_level": "concat", 00:21:51.444 "superblock": true, 00:21:51.444 "num_base_bdevs": 4, 00:21:51.444 "num_base_bdevs_discovered": 4, 00:21:51.444 "num_base_bdevs_operational": 4, 00:21:51.444 "base_bdevs_list": [ 00:21:51.444 { 00:21:51.444 "name": "BaseBdev1", 00:21:51.444 "uuid": 
"5c827857-6e89-54d9-b75e-7b8ab557762e", 00:21:51.444 "is_configured": true, 00:21:51.444 "data_offset": 2048, 00:21:51.444 "data_size": 63488 00:21:51.444 }, 00:21:51.444 { 00:21:51.444 "name": "BaseBdev2", 00:21:51.444 "uuid": "457d19d3-648d-57f4-a8d0-e284809321ae", 00:21:51.444 "is_configured": true, 00:21:51.444 "data_offset": 2048, 00:21:51.444 "data_size": 63488 00:21:51.444 }, 00:21:51.444 { 00:21:51.444 "name": "BaseBdev3", 00:21:51.444 "uuid": "e7dbc4e4-6ff0-5975-bed0-400a7a64a243", 00:21:51.444 "is_configured": true, 00:21:51.444 "data_offset": 2048, 00:21:51.444 "data_size": 63488 00:21:51.444 }, 00:21:51.444 { 00:21:51.444 "name": "BaseBdev4", 00:21:51.444 "uuid": "3b67fd7d-147b-5bb6-a2f3-a490b688919e", 00:21:51.444 "is_configured": true, 00:21:51.444 "data_offset": 2048, 00:21:51.444 "data_size": 63488 00:21:51.444 } 00:21:51.444 ] 00:21:51.444 }' 00:21:51.444 19:06:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.444 19:06:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.012 19:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:52.012 [2024-06-10 19:06:06.715415] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.012 [2024-06-10 19:06:06.715447] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.012 [2024-06-10 19:06:06.718357] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.012 [2024-06-10 19:06:06.718392] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.012 [2024-06-10 19:06:06.718426] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.012 [2024-06-10 19:06:06.718436] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x12043a0 name raid_bdev1, state offline 00:21:52.012 0 00:21:52.012 19:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1723169 00:21:52.013 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1723169 ']' 00:21:52.013 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1723169 00:21:52.013 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:21:52.013 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:52.013 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1723169 00:21:52.272 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:52.273 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:52.273 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1723169' 00:21:52.273 killing process with pid 1723169 00:21:52.273 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1723169 00:21:52.273 [2024-06-10 19:06:06.794835] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.273 19:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1723169 00:21:52.273 [2024-06-10 19:06:06.820813] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ItwthQfkWh 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:21:52.273 19:06:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:21:52.273 00:21:52.273 real 0m7.253s 00:21:52.273 user 0m11.549s 00:21:52.273 sys 0m1.280s 00:21:52.273 19:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:52.533 19:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.533 ************************************ 00:21:52.533 END TEST raid_read_error_test 00:21:52.533 ************************************ 00:21:52.533 19:06:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:21:52.533 19:06:07 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:21:52.533 19:06:07 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:52.533 19:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.533 ************************************ 00:21:52.533 START TEST raid_write_error_test 00:21:52.533 ************************************ 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 4 write 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:52.533 
19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8e6fUdr8UI 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1724972 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1724972 /var/tmp/spdk-raid.sock 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1724972 ']' 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:52.533 19:06:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.533 [2024-06-10 19:06:07.190075] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:21:52.533 [2024-06-10 19:06:07.190136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724972 ] 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.0 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.1 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.2 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.3 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.4 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.5 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.6 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:01.7 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.0 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.1 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.2 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.3 cannot be used 00:21:52.533 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.4 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.5 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.6 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b6:02.7 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b8:01.0 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b8:01.1 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.533 EAL: Requested device 0000:b8:01.2 cannot be used 00:21:52.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:01.3 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:01.4 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:01.5 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:01.6 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:01.7 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.0 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.1 cannot be used 00:21:52.534 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.2 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.3 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.4 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.5 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.6 cannot be used 00:21:52.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.534 EAL: Requested device 0000:b8:02.7 cannot be used 00:21:52.793 [2024-06-10 19:06:07.322714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.793 [2024-06-10 19:06:07.409254] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.793 [2024-06-10 19:06:07.483271] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.793 [2024-06-10 19:06:07.483300] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.362 19:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:53.362 19:06:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:21:53.362 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:53.362 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:53.622 BaseBdev1_malloc 00:21:53.622 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:53.881 true 00:21:53.881 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:54.141 [2024-06-10 19:06:08.728204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:54.141 [2024-06-10 19:06:08.728243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.141 [2024-06-10 19:06:08.728260] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b2d50 00:21:54.141 [2024-06-10 19:06:08.728271] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.141 [2024-06-10 19:06:08.729803] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.141 [2024-06-10 19:06:08.729829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:54.141 BaseBdev1 00:21:54.141 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:54.141 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:54.400 BaseBdev2_malloc 00:21:54.400 19:06:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:54.660 true 00:21:54.660 19:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:54.660 [2024-06-10 19:06:09.414355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:21:54.660 [2024-06-10 19:06:09.414394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.660 [2024-06-10 19:06:09.414411] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b82e0 00:21:54.660 [2024-06-10 19:06:09.414422] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.660 [2024-06-10 19:06:09.415798] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.660 [2024-06-10 19:06:09.415823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:54.919 BaseBdev2 00:21:54.919 19:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:54.919 19:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:54.919 BaseBdev3_malloc 00:21:54.919 19:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:55.179 true 00:21:55.179 19:06:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:55.438 [2024-06-10 19:06:10.096514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:55.438 [2024-06-10 19:06:10.096557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.438 [2024-06-10 19:06:10.096581] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b9fd0 00:21:55.438 [2024-06-10 19:06:10.096592] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.438 [2024-06-10 
19:06:10.098035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.438 [2024-06-10 19:06:10.098063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:55.438 BaseBdev3 00:21:55.438 19:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:55.438 19:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:55.697 BaseBdev4_malloc 00:21:55.697 19:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:55.956 true 00:21:55.956 19:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:56.215 [2024-06-10 19:06:10.766496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:56.215 [2024-06-10 19:06:10.766531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.215 [2024-06-10 19:06:10.766549] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12bb830 00:21:56.215 [2024-06-10 19:06:10.766561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.215 [2024-06-10 19:06:10.767886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.215 [2024-06-10 19:06:10.767912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:56.215 BaseBdev4 00:21:56.215 19:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:56.475 [2024-06-10 19:06:10.991126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.475 [2024-06-10 19:06:10.992327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.475 [2024-06-10 19:06:10.992391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:56.475 [2024-06-10 19:06:10.992448] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:56.475 [2024-06-10 19:06:10.992666] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12bc3a0 00:21:56.475 [2024-06-10 19:06:10.992677] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:56.475 [2024-06-10 19:06:10.992849] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12bdac0 00:21:56.475 [2024-06-10 19:06:10.992983] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12bc3a0 00:21:56.475 [2024-06-10 19:06:10.992992] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12bc3a0 00:21:56.475 [2024-06-10 19:06:10.993084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.475 19:06:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.475 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.734 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.734 "name": "raid_bdev1", 00:21:56.734 "uuid": "50110955-125e-46f4-93d5-1f04186c860c", 00:21:56.734 "strip_size_kb": 64, 00:21:56.734 "state": "online", 00:21:56.734 "raid_level": "concat", 00:21:56.734 "superblock": true, 00:21:56.734 "num_base_bdevs": 4, 00:21:56.734 "num_base_bdevs_discovered": 4, 00:21:56.734 "num_base_bdevs_operational": 4, 00:21:56.734 "base_bdevs_list": [ 00:21:56.734 { 00:21:56.734 "name": "BaseBdev1", 00:21:56.734 "uuid": "a1bc7910-d188-5cf1-977e-9733d8498ae2", 00:21:56.734 "is_configured": true, 00:21:56.734 "data_offset": 2048, 00:21:56.734 "data_size": 63488 00:21:56.734 }, 00:21:56.734 { 00:21:56.734 "name": "BaseBdev2", 00:21:56.734 "uuid": "4d49b7ab-7aa7-5b08-969d-f42cf3052ea5", 00:21:56.734 "is_configured": true, 00:21:56.734 "data_offset": 2048, 00:21:56.735 "data_size": 63488 00:21:56.735 }, 00:21:56.735 { 00:21:56.735 "name": "BaseBdev3", 00:21:56.735 "uuid": "8b3731b4-83d0-5e03-bc00-930fa04d70bc", 00:21:56.735 "is_configured": true, 00:21:56.735 "data_offset": 2048, 00:21:56.735 "data_size": 
63488 00:21:56.735 }, 00:21:56.735 { 00:21:56.735 "name": "BaseBdev4", 00:21:56.735 "uuid": "bf62e970-b915-5612-96ff-55bfdb6763dd", 00:21:56.735 "is_configured": true, 00:21:56.735 "data_offset": 2048, 00:21:56.735 "data_size": 63488 00:21:56.735 } 00:21:56.735 ] 00:21:56.735 }' 00:21:56.735 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.735 19:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.302 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:57.303 19:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:57.303 [2024-06-10 19:06:11.881688] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12bd940 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.239 19:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.498 19:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.498 "name": "raid_bdev1", 00:21:58.498 "uuid": "50110955-125e-46f4-93d5-1f04186c860c", 00:21:58.498 "strip_size_kb": 64, 00:21:58.498 "state": "online", 00:21:58.498 "raid_level": "concat", 00:21:58.498 "superblock": true, 00:21:58.498 "num_base_bdevs": 4, 00:21:58.498 "num_base_bdevs_discovered": 4, 00:21:58.498 "num_base_bdevs_operational": 4, 00:21:58.498 "base_bdevs_list": [ 00:21:58.498 { 00:21:58.498 "name": "BaseBdev1", 00:21:58.498 "uuid": "a1bc7910-d188-5cf1-977e-9733d8498ae2", 00:21:58.498 "is_configured": true, 00:21:58.498 "data_offset": 2048, 00:21:58.498 "data_size": 63488 00:21:58.498 }, 00:21:58.498 { 00:21:58.498 "name": "BaseBdev2", 00:21:58.498 "uuid": "4d49b7ab-7aa7-5b08-969d-f42cf3052ea5", 00:21:58.498 "is_configured": true, 00:21:58.498 "data_offset": 2048, 00:21:58.498 "data_size": 63488 00:21:58.498 }, 00:21:58.498 { 00:21:58.498 "name": "BaseBdev3", 00:21:58.498 "uuid": "8b3731b4-83d0-5e03-bc00-930fa04d70bc", 00:21:58.498 
"is_configured": true, 00:21:58.498 "data_offset": 2048, 00:21:58.498 "data_size": 63488 00:21:58.498 }, 00:21:58.498 { 00:21:58.498 "name": "BaseBdev4", 00:21:58.498 "uuid": "bf62e970-b915-5612-96ff-55bfdb6763dd", 00:21:58.498 "is_configured": true, 00:21:58.498 "data_offset": 2048, 00:21:58.498 "data_size": 63488 00:21:58.498 } 00:21:58.498 ] 00:21:58.498 }' 00:21:58.498 19:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.498 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.067 19:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:59.326 [2024-06-10 19:06:13.916056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.326 [2024-06-10 19:06:13.916090] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.326 [2024-06-10 19:06:13.918999] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.326 [2024-06-10 19:06:13.919033] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.326 [2024-06-10 19:06:13.919067] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.326 [2024-06-10 19:06:13.919077] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12bc3a0 name raid_bdev1, state offline 00:21:59.326 0 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1724972 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1724972 ']' 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1724972 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:21:59.327 19:06:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1724972 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1724972' 00:21:59.327 killing process with pid 1724972 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1724972 00:21:59.327 [2024-06-10 19:06:13.995933] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:59.327 19:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1724972 00:21:59.327 [2024-06-10 19:06:14.022009] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8e6fUdr8UI 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:21:59.587 00:21:59.587 real 0m7.115s 00:21:59.587 user 0m11.261s 00:21:59.587 sys 0m1.245s 00:21:59.587 19:06:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:59.587 19:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 ************************************ 00:21:59.587 END TEST raid_write_error_test 00:21:59.587 ************************************ 00:21:59.587 19:06:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:59.587 19:06:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:21:59.587 19:06:14 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:21:59.587 19:06:14 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:59.587 19:06:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 ************************************ 00:21:59.587 START TEST raid_state_function_test 00:21:59.587 ************************************ 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 4 false 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1726329 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1726329' 00:21:59.587 Process raid pid: 1726329 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1726329 /var/tmp/spdk-raid.sock 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 1726329 ']' 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:59.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:59.587 19:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.848 [2024-06-10 19:06:14.377752] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:21:59.848 [2024-06-10 19:06:14.377808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.0 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.1 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.2 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.3 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.4 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.5 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.6 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:01.7 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.0 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.1 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.2 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.3 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.4 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.5 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.6 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b6:02.7 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.0 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.1 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.2 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.3 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.4 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.5 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.6 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:01.7 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.0 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.1 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:21:59.848 EAL: Requested device 0000:b8:02.2 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.3 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.4 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.5 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.6 cannot be used 00:21:59.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:59.848 EAL: Requested device 0000:b8:02.7 cannot be used 00:21:59.848 [2024-06-10 19:06:14.511445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.848 [2024-06-10 19:06:14.597643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.107 [2024-06-10 19:06:14.656417] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.107 [2024-06-10 19:06:14.656453] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.675 19:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:00.675 19:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:22:00.675 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:00.935 [2024-06-10 19:06:15.478649] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:00.935 [2024-06-10 19:06:15.478691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:00.935 [2024-06-10 19:06:15.478701] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:00.935 [2024-06-10 19:06:15.478712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:00.935 [2024-06-10 19:06:15.478720] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:00.935 [2024-06-10 19:06:15.478735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:00.935 [2024-06-10 19:06:15.478743] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:00.935 [2024-06-10 19:06:15.478757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.935 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.195 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.195 "name": "Existed_Raid", 00:22:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.195 "strip_size_kb": 0, 00:22:01.195 "state": "configuring", 00:22:01.195 "raid_level": "raid1", 00:22:01.195 "superblock": false, 00:22:01.195 "num_base_bdevs": 4, 00:22:01.195 "num_base_bdevs_discovered": 0, 00:22:01.195 "num_base_bdevs_operational": 4, 00:22:01.195 "base_bdevs_list": [ 00:22:01.195 { 00:22:01.195 "name": "BaseBdev1", 00:22:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.195 "is_configured": false, 00:22:01.195 "data_offset": 0, 00:22:01.195 "data_size": 0 00:22:01.195 }, 00:22:01.195 { 00:22:01.195 "name": "BaseBdev2", 00:22:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.195 "is_configured": false, 00:22:01.195 "data_offset": 0, 00:22:01.195 "data_size": 0 00:22:01.195 }, 00:22:01.195 { 00:22:01.195 "name": "BaseBdev3", 00:22:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.195 "is_configured": false, 00:22:01.195 "data_offset": 0, 00:22:01.195 "data_size": 0 00:22:01.195 }, 00:22:01.195 { 00:22:01.195 "name": "BaseBdev4", 00:22:01.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.195 "is_configured": false, 00:22:01.195 "data_offset": 0, 00:22:01.195 "data_size": 0 00:22:01.195 } 00:22:01.195 ] 00:22:01.195 }' 00:22:01.195 19:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.195 19:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.472 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:01.788 [2024-06-10 19:06:16.412993] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:01.788 [2024-06-10 19:06:16.413022] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe6df50 name Existed_Raid, state configuring 00:22:01.788 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:02.047 [2024-06-10 19:06:16.641607] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.047 [2024-06-10 19:06:16.641638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.047 [2024-06-10 19:06:16.641647] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.047 [2024-06-10 19:06:16.641657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.047 [2024-06-10 19:06:16.641670] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:02.047 [2024-06-10 19:06:16.641680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:02.047 [2024-06-10 19:06:16.641689] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:02.047 [2024-06-10 19:06:16.641698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:02.047 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:02.307 [2024-06-10 19:06:16.879823] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.307 BaseBdev1 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:02.307 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:02.566 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:02.825 [ 00:22:02.825 { 00:22:02.825 "name": "BaseBdev1", 00:22:02.825 "aliases": [ 00:22:02.825 "93fcef48-85c9-436d-ac89-0ad3a8abb757" 00:22:02.825 ], 00:22:02.825 "product_name": "Malloc disk", 00:22:02.825 "block_size": 512, 00:22:02.825 "num_blocks": 65536, 00:22:02.825 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:02.825 "assigned_rate_limits": { 00:22:02.825 "rw_ios_per_sec": 0, 00:22:02.825 "rw_mbytes_per_sec": 0, 00:22:02.825 "r_mbytes_per_sec": 0, 00:22:02.825 "w_mbytes_per_sec": 0 00:22:02.825 }, 00:22:02.825 "claimed": true, 00:22:02.825 "claim_type": "exclusive_write", 00:22:02.825 "zoned": false, 00:22:02.825 "supported_io_types": { 00:22:02.825 "read": true, 00:22:02.825 "write": true, 00:22:02.825 "unmap": true, 00:22:02.825 "write_zeroes": true, 
00:22:02.825 "flush": true, 00:22:02.825 "reset": true, 00:22:02.825 "compare": false, 00:22:02.825 "compare_and_write": false, 00:22:02.825 "abort": true, 00:22:02.825 "nvme_admin": false, 00:22:02.825 "nvme_io": false 00:22:02.825 }, 00:22:02.825 "memory_domains": [ 00:22:02.825 { 00:22:02.825 "dma_device_id": "system", 00:22:02.825 "dma_device_type": 1 00:22:02.825 }, 00:22:02.825 { 00:22:02.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.825 "dma_device_type": 2 00:22:02.825 } 00:22:02.825 ], 00:22:02.825 "driver_specific": {} 00:22:02.825 } 00:22:02.825 ] 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.825 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.826 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.085 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.085 "name": "Existed_Raid", 00:22:03.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.085 "strip_size_kb": 0, 00:22:03.085 "state": "configuring", 00:22:03.085 "raid_level": "raid1", 00:22:03.085 "superblock": false, 00:22:03.085 "num_base_bdevs": 4, 00:22:03.085 "num_base_bdevs_discovered": 1, 00:22:03.085 "num_base_bdevs_operational": 4, 00:22:03.085 "base_bdevs_list": [ 00:22:03.085 { 00:22:03.085 "name": "BaseBdev1", 00:22:03.085 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:03.085 "is_configured": true, 00:22:03.085 "data_offset": 0, 00:22:03.085 "data_size": 65536 00:22:03.085 }, 00:22:03.085 { 00:22:03.085 "name": "BaseBdev2", 00:22:03.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.085 "is_configured": false, 00:22:03.085 "data_offset": 0, 00:22:03.085 "data_size": 0 00:22:03.085 }, 00:22:03.085 { 00:22:03.085 "name": "BaseBdev3", 00:22:03.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.085 "is_configured": false, 00:22:03.085 "data_offset": 0, 00:22:03.085 "data_size": 0 00:22:03.085 }, 00:22:03.085 { 00:22:03.085 "name": "BaseBdev4", 00:22:03.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.085 "is_configured": false, 00:22:03.085 "data_offset": 0, 00:22:03.085 "data_size": 0 00:22:03.085 } 00:22:03.085 ] 00:22:03.085 }' 00:22:03.085 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.085 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.653 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:22:03.653 [2024-06-10 19:06:18.375762] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:03.653 [2024-06-10 19:06:18.375798] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe6d7c0 name Existed_Raid, state configuring 00:22:03.653 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:03.912 [2024-06-10 19:06:18.600372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.912 [2024-06-10 19:06:18.601775] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:03.912 [2024-06-10 19:06:18.601809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:03.912 [2024-06-10 19:06:18.601818] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:03.912 [2024-06-10 19:06:18.601829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:03.912 [2024-06-10 19:06:18.601837] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:03.912 [2024-06-10 19:06:18.601847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.912 19:06:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.912 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.172 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.172 "name": "Existed_Raid", 00:22:04.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.172 "strip_size_kb": 0, 00:22:04.172 "state": "configuring", 00:22:04.172 "raid_level": "raid1", 00:22:04.172 "superblock": false, 00:22:04.172 "num_base_bdevs": 4, 00:22:04.172 "num_base_bdevs_discovered": 1, 00:22:04.172 "num_base_bdevs_operational": 4, 00:22:04.172 "base_bdevs_list": [ 00:22:04.172 { 00:22:04.172 "name": "BaseBdev1", 00:22:04.172 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:04.172 "is_configured": true, 00:22:04.172 "data_offset": 0, 00:22:04.172 "data_size": 65536 00:22:04.172 }, 00:22:04.172 { 00:22:04.172 "name": "BaseBdev2", 00:22:04.172 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:04.172 "is_configured": false, 00:22:04.172 "data_offset": 0, 00:22:04.172 "data_size": 0 00:22:04.172 }, 00:22:04.172 { 00:22:04.172 "name": "BaseBdev3", 00:22:04.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.172 "is_configured": false, 00:22:04.172 "data_offset": 0, 00:22:04.172 "data_size": 0 00:22:04.172 }, 00:22:04.172 { 00:22:04.172 "name": "BaseBdev4", 00:22:04.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.172 "is_configured": false, 00:22:04.172 "data_offset": 0, 00:22:04.172 "data_size": 0 00:22:04.172 } 00:22:04.172 ] 00:22:04.172 }' 00:22:04.172 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.172 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.739 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:05.008 [2024-06-10 19:06:19.638260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:05.008 BaseBdev2 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:05.008 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:05.277 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:05.537 [ 00:22:05.537 { 00:22:05.537 "name": "BaseBdev2", 00:22:05.537 "aliases": [ 00:22:05.537 "d4ea6b0d-d723-427d-aeed-b2bab47a0586" 00:22:05.537 ], 00:22:05.537 "product_name": "Malloc disk", 00:22:05.537 "block_size": 512, 00:22:05.537 "num_blocks": 65536, 00:22:05.537 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:05.537 "assigned_rate_limits": { 00:22:05.537 "rw_ios_per_sec": 0, 00:22:05.537 "rw_mbytes_per_sec": 0, 00:22:05.537 "r_mbytes_per_sec": 0, 00:22:05.537 "w_mbytes_per_sec": 0 00:22:05.537 }, 00:22:05.537 "claimed": true, 00:22:05.537 "claim_type": "exclusive_write", 00:22:05.537 "zoned": false, 00:22:05.537 "supported_io_types": { 00:22:05.537 "read": true, 00:22:05.537 "write": true, 00:22:05.537 "unmap": true, 00:22:05.537 "write_zeroes": true, 00:22:05.537 "flush": true, 00:22:05.537 "reset": true, 00:22:05.537 "compare": false, 00:22:05.537 "compare_and_write": false, 00:22:05.537 "abort": true, 00:22:05.537 "nvme_admin": false, 00:22:05.537 "nvme_io": false 00:22:05.537 }, 00:22:05.537 "memory_domains": [ 00:22:05.537 { 00:22:05.537 "dma_device_id": "system", 00:22:05.537 "dma_device_type": 1 00:22:05.537 }, 00:22:05.537 { 00:22:05.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.537 "dma_device_type": 2 00:22:05.537 } 00:22:05.537 ], 00:22:05.537 "driver_specific": {} 00:22:05.537 } 00:22:05.537 ] 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.537 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.538 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.538 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.538 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.538 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.796 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.796 "name": "Existed_Raid", 00:22:05.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.796 "strip_size_kb": 0, 00:22:05.796 "state": "configuring", 00:22:05.796 "raid_level": "raid1", 00:22:05.796 "superblock": false, 00:22:05.796 "num_base_bdevs": 4, 00:22:05.796 "num_base_bdevs_discovered": 2, 00:22:05.796 "num_base_bdevs_operational": 4, 00:22:05.796 
"base_bdevs_list": [ 00:22:05.796 { 00:22:05.796 "name": "BaseBdev1", 00:22:05.796 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:05.796 "is_configured": true, 00:22:05.796 "data_offset": 0, 00:22:05.796 "data_size": 65536 00:22:05.796 }, 00:22:05.796 { 00:22:05.796 "name": "BaseBdev2", 00:22:05.796 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:05.796 "is_configured": true, 00:22:05.796 "data_offset": 0, 00:22:05.796 "data_size": 65536 00:22:05.796 }, 00:22:05.796 { 00:22:05.796 "name": "BaseBdev3", 00:22:05.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.796 "is_configured": false, 00:22:05.796 "data_offset": 0, 00:22:05.796 "data_size": 0 00:22:05.796 }, 00:22:05.796 { 00:22:05.796 "name": "BaseBdev4", 00:22:05.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.796 "is_configured": false, 00:22:05.796 "data_offset": 0, 00:22:05.796 "data_size": 0 00:22:05.796 } 00:22:05.796 ] 00:22:05.796 }' 00:22:05.796 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.796 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.364 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:06.624 [2024-06-10 19:06:21.133373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:06.624 BaseBdev3 00:22:06.624 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:06.624 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:22:06.624 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:06.624 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:06.624 19:06:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:06.624 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:06.624 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:06.883 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:06.883 [ 00:22:06.883 { 00:22:06.884 "name": "BaseBdev3", 00:22:06.884 "aliases": [ 00:22:06.884 "9f44fe13-1dc8-4216-a4f8-f2d309a38481" 00:22:06.884 ], 00:22:06.884 "product_name": "Malloc disk", 00:22:06.884 "block_size": 512, 00:22:06.884 "num_blocks": 65536, 00:22:06.884 "uuid": "9f44fe13-1dc8-4216-a4f8-f2d309a38481", 00:22:06.884 "assigned_rate_limits": { 00:22:06.884 "rw_ios_per_sec": 0, 00:22:06.884 "rw_mbytes_per_sec": 0, 00:22:06.884 "r_mbytes_per_sec": 0, 00:22:06.884 "w_mbytes_per_sec": 0 00:22:06.884 }, 00:22:06.884 "claimed": true, 00:22:06.884 "claim_type": "exclusive_write", 00:22:06.884 "zoned": false, 00:22:06.884 "supported_io_types": { 00:22:06.884 "read": true, 00:22:06.884 "write": true, 00:22:06.884 "unmap": true, 00:22:06.884 "write_zeroes": true, 00:22:06.884 "flush": true, 00:22:06.884 "reset": true, 00:22:06.884 "compare": false, 00:22:06.884 "compare_and_write": false, 00:22:06.884 "abort": true, 00:22:06.884 "nvme_admin": false, 00:22:06.884 "nvme_io": false 00:22:06.884 }, 00:22:06.884 "memory_domains": [ 00:22:06.884 { 00:22:06.884 "dma_device_id": "system", 00:22:06.884 "dma_device_type": 1 00:22:06.884 }, 00:22:06.884 { 00:22:06.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.884 "dma_device_type": 2 00:22:06.884 } 00:22:06.884 ], 00:22:06.884 "driver_specific": {} 00:22:06.884 } 00:22:06.884 ] 00:22:06.884 
19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.884 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.144 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.144 "name": "Existed_Raid", 00:22:07.144 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:07.144 "strip_size_kb": 0, 00:22:07.144 "state": "configuring", 00:22:07.144 "raid_level": "raid1", 00:22:07.144 "superblock": false, 00:22:07.144 "num_base_bdevs": 4, 00:22:07.144 "num_base_bdevs_discovered": 3, 00:22:07.144 "num_base_bdevs_operational": 4, 00:22:07.144 "base_bdevs_list": [ 00:22:07.144 { 00:22:07.144 "name": "BaseBdev1", 00:22:07.144 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:07.144 "is_configured": true, 00:22:07.144 "data_offset": 0, 00:22:07.144 "data_size": 65536 00:22:07.144 }, 00:22:07.144 { 00:22:07.144 "name": "BaseBdev2", 00:22:07.144 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:07.144 "is_configured": true, 00:22:07.144 "data_offset": 0, 00:22:07.144 "data_size": 65536 00:22:07.144 }, 00:22:07.144 { 00:22:07.144 "name": "BaseBdev3", 00:22:07.144 "uuid": "9f44fe13-1dc8-4216-a4f8-f2d309a38481", 00:22:07.144 "is_configured": true, 00:22:07.144 "data_offset": 0, 00:22:07.144 "data_size": 65536 00:22:07.144 }, 00:22:07.144 { 00:22:07.144 "name": "BaseBdev4", 00:22:07.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.144 "is_configured": false, 00:22:07.144 "data_offset": 0, 00:22:07.144 "data_size": 0 00:22:07.144 } 00:22:07.144 ] 00:22:07.144 }' 00:22:07.144 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.144 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.712 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:07.971 [2024-06-10 19:06:22.528179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:07.971 [2024-06-10 19:06:22.528217] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe6e820 00:22:07.971 [2024-06-10 19:06:22.528225] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 
65536, blocklen 512 00:22:07.971 [2024-06-10 19:06:22.528452] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe744a0 00:22:07.971 [2024-06-10 19:06:22.528567] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe6e820 00:22:07.972 [2024-06-10 19:06:22.528588] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe6e820 00:22:07.972 [2024-06-10 19:06:22.528739] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.972 BaseBdev4 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:07.972 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:08.231 [ 00:22:08.231 { 00:22:08.231 "name": "BaseBdev4", 00:22:08.231 "aliases": [ 00:22:08.231 "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f" 00:22:08.231 ], 00:22:08.231 "product_name": "Malloc disk", 00:22:08.231 "block_size": 512, 00:22:08.231 "num_blocks": 65536, 00:22:08.231 "uuid": "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f", 00:22:08.231 
"assigned_rate_limits": { 00:22:08.231 "rw_ios_per_sec": 0, 00:22:08.231 "rw_mbytes_per_sec": 0, 00:22:08.231 "r_mbytes_per_sec": 0, 00:22:08.231 "w_mbytes_per_sec": 0 00:22:08.231 }, 00:22:08.231 "claimed": true, 00:22:08.231 "claim_type": "exclusive_write", 00:22:08.231 "zoned": false, 00:22:08.231 "supported_io_types": { 00:22:08.231 "read": true, 00:22:08.231 "write": true, 00:22:08.231 "unmap": true, 00:22:08.231 "write_zeroes": true, 00:22:08.231 "flush": true, 00:22:08.231 "reset": true, 00:22:08.231 "compare": false, 00:22:08.231 "compare_and_write": false, 00:22:08.231 "abort": true, 00:22:08.231 "nvme_admin": false, 00:22:08.231 "nvme_io": false 00:22:08.231 }, 00:22:08.231 "memory_domains": [ 00:22:08.231 { 00:22:08.231 "dma_device_id": "system", 00:22:08.231 "dma_device_type": 1 00:22:08.231 }, 00:22:08.231 { 00:22:08.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.231 "dma_device_type": 2 00:22:08.231 } 00:22:08.231 ], 00:22:08.231 "driver_specific": {} 00:22:08.231 } 00:22:08.231 ] 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.231 19:06:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:08.232 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.232 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.232 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.232 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.508 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.508 19:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.508 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.508 "name": "Existed_Raid", 00:22:08.508 "uuid": "45e3e21f-9697-4d52-b59d-aef6ea0cd2f8", 00:22:08.508 "strip_size_kb": 0, 00:22:08.508 "state": "online", 00:22:08.508 "raid_level": "raid1", 00:22:08.508 "superblock": false, 00:22:08.508 "num_base_bdevs": 4, 00:22:08.508 "num_base_bdevs_discovered": 4, 00:22:08.508 "num_base_bdevs_operational": 4, 00:22:08.508 "base_bdevs_list": [ 00:22:08.508 { 00:22:08.508 "name": "BaseBdev1", 00:22:08.508 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:08.508 "is_configured": true, 00:22:08.508 "data_offset": 0, 00:22:08.508 "data_size": 65536 00:22:08.508 }, 00:22:08.508 { 00:22:08.508 "name": "BaseBdev2", 00:22:08.508 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:08.508 "is_configured": true, 00:22:08.508 "data_offset": 0, 00:22:08.508 "data_size": 65536 00:22:08.508 }, 00:22:08.508 { 00:22:08.508 "name": "BaseBdev3", 00:22:08.508 "uuid": "9f44fe13-1dc8-4216-a4f8-f2d309a38481", 00:22:08.508 "is_configured": true, 00:22:08.509 "data_offset": 0, 00:22:08.509 "data_size": 65536 00:22:08.509 }, 
00:22:08.509 { 00:22:08.509 "name": "BaseBdev4", 00:22:08.509 "uuid": "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f", 00:22:08.509 "is_configured": true, 00:22:08.509 "data_offset": 0, 00:22:08.509 "data_size": 65536 00:22:08.509 } 00:22:08.509 ] 00:22:08.509 }' 00:22:08.509 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.509 19:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:09.086 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:09.345 [2024-06-10 19:06:23.968249] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.345 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:09.345 "name": "Existed_Raid", 00:22:09.345 "aliases": [ 00:22:09.345 "45e3e21f-9697-4d52-b59d-aef6ea0cd2f8" 00:22:09.345 ], 00:22:09.345 "product_name": "Raid Volume", 00:22:09.345 "block_size": 512, 00:22:09.345 "num_blocks": 65536, 00:22:09.345 "uuid": "45e3e21f-9697-4d52-b59d-aef6ea0cd2f8", 00:22:09.345 "assigned_rate_limits": 
{ 00:22:09.345 "rw_ios_per_sec": 0, 00:22:09.345 "rw_mbytes_per_sec": 0, 00:22:09.346 "r_mbytes_per_sec": 0, 00:22:09.346 "w_mbytes_per_sec": 0 00:22:09.346 }, 00:22:09.346 "claimed": false, 00:22:09.346 "zoned": false, 00:22:09.346 "supported_io_types": { 00:22:09.346 "read": true, 00:22:09.346 "write": true, 00:22:09.346 "unmap": false, 00:22:09.346 "write_zeroes": true, 00:22:09.346 "flush": false, 00:22:09.346 "reset": true, 00:22:09.346 "compare": false, 00:22:09.346 "compare_and_write": false, 00:22:09.346 "abort": false, 00:22:09.346 "nvme_admin": false, 00:22:09.346 "nvme_io": false 00:22:09.346 }, 00:22:09.346 "memory_domains": [ 00:22:09.346 { 00:22:09.346 "dma_device_id": "system", 00:22:09.346 "dma_device_type": 1 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.346 "dma_device_type": 2 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "system", 00:22:09.346 "dma_device_type": 1 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.346 "dma_device_type": 2 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "system", 00:22:09.346 "dma_device_type": 1 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.346 "dma_device_type": 2 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "system", 00:22:09.346 "dma_device_type": 1 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.346 "dma_device_type": 2 00:22:09.346 } 00:22:09.346 ], 00:22:09.346 "driver_specific": { 00:22:09.346 "raid": { 00:22:09.346 "uuid": "45e3e21f-9697-4d52-b59d-aef6ea0cd2f8", 00:22:09.346 "strip_size_kb": 0, 00:22:09.346 "state": "online", 00:22:09.346 "raid_level": "raid1", 00:22:09.346 "superblock": false, 00:22:09.346 "num_base_bdevs": 4, 00:22:09.346 "num_base_bdevs_discovered": 4, 00:22:09.346 "num_base_bdevs_operational": 4, 00:22:09.346 "base_bdevs_list": [ 00:22:09.346 { 
00:22:09.346 "name": "BaseBdev1", 00:22:09.346 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:09.346 "is_configured": true, 00:22:09.346 "data_offset": 0, 00:22:09.346 "data_size": 65536 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "name": "BaseBdev2", 00:22:09.346 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:09.346 "is_configured": true, 00:22:09.346 "data_offset": 0, 00:22:09.346 "data_size": 65536 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "name": "BaseBdev3", 00:22:09.346 "uuid": "9f44fe13-1dc8-4216-a4f8-f2d309a38481", 00:22:09.346 "is_configured": true, 00:22:09.346 "data_offset": 0, 00:22:09.346 "data_size": 65536 00:22:09.346 }, 00:22:09.346 { 00:22:09.346 "name": "BaseBdev4", 00:22:09.346 "uuid": "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f", 00:22:09.346 "is_configured": true, 00:22:09.346 "data_offset": 0, 00:22:09.346 "data_size": 65536 00:22:09.346 } 00:22:09.346 ] 00:22:09.346 } 00:22:09.346 } 00:22:09.346 }' 00:22:09.346 19:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:09.346 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:09.346 BaseBdev2 00:22:09.346 BaseBdev3 00:22:09.346 BaseBdev4' 00:22:09.346 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.346 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:09.346 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.606 "name": "BaseBdev1", 00:22:09.606 "aliases": [ 00:22:09.606 "93fcef48-85c9-436d-ac89-0ad3a8abb757" 00:22:09.606 ], 00:22:09.606 "product_name": "Malloc disk", 
00:22:09.606 "block_size": 512, 00:22:09.606 "num_blocks": 65536, 00:22:09.606 "uuid": "93fcef48-85c9-436d-ac89-0ad3a8abb757", 00:22:09.606 "assigned_rate_limits": { 00:22:09.606 "rw_ios_per_sec": 0, 00:22:09.606 "rw_mbytes_per_sec": 0, 00:22:09.606 "r_mbytes_per_sec": 0, 00:22:09.606 "w_mbytes_per_sec": 0 00:22:09.606 }, 00:22:09.606 "claimed": true, 00:22:09.606 "claim_type": "exclusive_write", 00:22:09.606 "zoned": false, 00:22:09.606 "supported_io_types": { 00:22:09.606 "read": true, 00:22:09.606 "write": true, 00:22:09.606 "unmap": true, 00:22:09.606 "write_zeroes": true, 00:22:09.606 "flush": true, 00:22:09.606 "reset": true, 00:22:09.606 "compare": false, 00:22:09.606 "compare_and_write": false, 00:22:09.606 "abort": true, 00:22:09.606 "nvme_admin": false, 00:22:09.606 "nvme_io": false 00:22:09.606 }, 00:22:09.606 "memory_domains": [ 00:22:09.606 { 00:22:09.606 "dma_device_id": "system", 00:22:09.606 "dma_device_type": 1 00:22:09.606 }, 00:22:09.606 { 00:22:09.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.606 "dma_device_type": 2 00:22:09.606 } 00:22:09.606 ], 00:22:09.606 "driver_specific": {} 00:22:09.606 }' 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:09.606 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.865 19:06:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:09.865 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.124 "name": "BaseBdev2", 00:22:10.124 "aliases": [ 00:22:10.124 "d4ea6b0d-d723-427d-aeed-b2bab47a0586" 00:22:10.124 ], 00:22:10.124 "product_name": "Malloc disk", 00:22:10.124 "block_size": 512, 00:22:10.124 "num_blocks": 65536, 00:22:10.124 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:10.124 "assigned_rate_limits": { 00:22:10.124 "rw_ios_per_sec": 0, 00:22:10.124 "rw_mbytes_per_sec": 0, 00:22:10.124 "r_mbytes_per_sec": 0, 00:22:10.124 "w_mbytes_per_sec": 0 00:22:10.124 }, 00:22:10.124 "claimed": true, 00:22:10.124 "claim_type": "exclusive_write", 00:22:10.124 "zoned": false, 00:22:10.124 "supported_io_types": { 00:22:10.124 "read": true, 00:22:10.124 "write": true, 00:22:10.124 "unmap": true, 00:22:10.124 "write_zeroes": true, 00:22:10.124 "flush": true, 00:22:10.124 "reset": true, 00:22:10.124 "compare": false, 00:22:10.124 "compare_and_write": false, 00:22:10.124 "abort": true, 00:22:10.124 "nvme_admin": false, 00:22:10.124 "nvme_io": false 00:22:10.124 }, 00:22:10.124 "memory_domains": [ 00:22:10.124 { 00:22:10.124 "dma_device_id": "system", 
00:22:10.124 "dma_device_type": 1 00:22:10.124 }, 00:22:10.124 { 00:22:10.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.124 "dma_device_type": 2 00:22:10.124 } 00:22:10.124 ], 00:22:10.124 "driver_specific": {} 00:22:10.124 }' 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.124 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.125 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:10.384 19:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:22:10.642 "name": "BaseBdev3", 00:22:10.642 "aliases": [ 00:22:10.642 "9f44fe13-1dc8-4216-a4f8-f2d309a38481" 00:22:10.642 ], 00:22:10.642 "product_name": "Malloc disk", 00:22:10.642 "block_size": 512, 00:22:10.642 "num_blocks": 65536, 00:22:10.642 "uuid": "9f44fe13-1dc8-4216-a4f8-f2d309a38481", 00:22:10.642 "assigned_rate_limits": { 00:22:10.642 "rw_ios_per_sec": 0, 00:22:10.642 "rw_mbytes_per_sec": 0, 00:22:10.642 "r_mbytes_per_sec": 0, 00:22:10.642 "w_mbytes_per_sec": 0 00:22:10.642 }, 00:22:10.642 "claimed": true, 00:22:10.642 "claim_type": "exclusive_write", 00:22:10.642 "zoned": false, 00:22:10.642 "supported_io_types": { 00:22:10.642 "read": true, 00:22:10.642 "write": true, 00:22:10.642 "unmap": true, 00:22:10.642 "write_zeroes": true, 00:22:10.642 "flush": true, 00:22:10.642 "reset": true, 00:22:10.642 "compare": false, 00:22:10.642 "compare_and_write": false, 00:22:10.642 "abort": true, 00:22:10.642 "nvme_admin": false, 00:22:10.642 "nvme_io": false 00:22:10.642 }, 00:22:10.642 "memory_domains": [ 00:22:10.642 { 00:22:10.642 "dma_device_id": "system", 00:22:10.642 "dma_device_type": 1 00:22:10.642 }, 00:22:10.642 { 00:22:10.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.642 "dma_device_type": 2 00:22:10.642 } 00:22:10.642 ], 00:22:10.642 "driver_specific": {} 00:22:10.642 }' 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.642 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:10.901 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:11.159 "name": "BaseBdev4", 00:22:11.159 "aliases": [ 00:22:11.159 "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f" 00:22:11.159 ], 00:22:11.159 "product_name": "Malloc disk", 00:22:11.159 "block_size": 512, 00:22:11.159 "num_blocks": 65536, 00:22:11.159 "uuid": "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f", 00:22:11.159 "assigned_rate_limits": { 00:22:11.159 "rw_ios_per_sec": 0, 00:22:11.159 "rw_mbytes_per_sec": 0, 00:22:11.159 "r_mbytes_per_sec": 0, 00:22:11.159 "w_mbytes_per_sec": 0 00:22:11.159 }, 00:22:11.159 "claimed": true, 00:22:11.159 "claim_type": "exclusive_write", 00:22:11.159 "zoned": false, 00:22:11.159 "supported_io_types": { 00:22:11.159 "read": true, 00:22:11.159 "write": true, 00:22:11.159 "unmap": true, 00:22:11.159 "write_zeroes": true, 00:22:11.159 "flush": true, 00:22:11.159 "reset": true, 00:22:11.159 "compare": false, 00:22:11.159 "compare_and_write": false, 00:22:11.159 "abort": true, 
00:22:11.159 "nvme_admin": false, 00:22:11.159 "nvme_io": false 00:22:11.159 }, 00:22:11.159 "memory_domains": [ 00:22:11.159 { 00:22:11.159 "dma_device_id": "system", 00:22:11.159 "dma_device_type": 1 00:22:11.159 }, 00:22:11.159 { 00:22:11.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.159 "dma_device_type": 2 00:22:11.159 } 00:22:11.159 ], 00:22:11.159 "driver_specific": {} 00:22:11.159 }' 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:11.159 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:11.418 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:11.418 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:11.418 19:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:11.418 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:11.418 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:11.418 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:11.678 [2024-06-10 19:06:26.258064] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:11.678 19:06:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.678 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.937 19:06:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.937 "name": "Existed_Raid", 00:22:11.937 "uuid": "45e3e21f-9697-4d52-b59d-aef6ea0cd2f8", 00:22:11.937 "strip_size_kb": 0, 00:22:11.937 "state": "online", 00:22:11.937 "raid_level": "raid1", 00:22:11.937 "superblock": false, 00:22:11.937 "num_base_bdevs": 4, 00:22:11.937 "num_base_bdevs_discovered": 3, 00:22:11.937 "num_base_bdevs_operational": 3, 00:22:11.937 "base_bdevs_list": [ 00:22:11.937 { 00:22:11.937 "name": null, 00:22:11.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.937 "is_configured": false, 00:22:11.937 "data_offset": 0, 00:22:11.937 "data_size": 65536 00:22:11.937 }, 00:22:11.937 { 00:22:11.937 "name": "BaseBdev2", 00:22:11.937 "uuid": "d4ea6b0d-d723-427d-aeed-b2bab47a0586", 00:22:11.937 "is_configured": true, 00:22:11.937 "data_offset": 0, 00:22:11.937 "data_size": 65536 00:22:11.937 }, 00:22:11.937 { 00:22:11.937 "name": "BaseBdev3", 00:22:11.937 "uuid": "9f44fe13-1dc8-4216-a4f8-f2d309a38481", 00:22:11.937 "is_configured": true, 00:22:11.938 "data_offset": 0, 00:22:11.938 "data_size": 65536 00:22:11.938 }, 00:22:11.938 { 00:22:11.938 "name": "BaseBdev4", 00:22:11.938 "uuid": "e4d4b7f2-98ed-4c33-be0e-9220c18d5b9f", 00:22:11.938 "is_configured": true, 00:22:11.938 "data_offset": 0, 00:22:11.938 "data_size": 65536 00:22:11.938 } 00:22:11.938 ] 00:22:11.938 }' 00:22:11.938 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.938 19:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.506 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:12.506 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:12.506 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:12.506 19:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:12.506 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:12.506 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:12.506 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:12.765 [2024-06-10 19:06:27.406195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:12.766 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:12.766 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:12.766 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:12.766 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.024 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:13.024 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:13.024 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:13.284 [2024-06-10 19:06:27.849773] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:13.284 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:13.284 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:13.284 19:06:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.284 19:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:13.543 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:13.543 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:13.543 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:13.803 [2024-06-10 19:06:28.301065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:13.803 [2024-06-10 19:06:28.301130] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:13.803 [2024-06-10 19:06:28.311429] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:13.803 [2024-06-10 19:06:28.311459] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:13.803 [2024-06-10 19:06:28.311469] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe6e820 name Existed_Raid, state offline 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:13.803 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:14.062 BaseBdev2 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:14.062 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.321 19:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:14.581 [ 00:22:14.581 { 00:22:14.581 "name": "BaseBdev2", 00:22:14.581 "aliases": [ 00:22:14.581 "aa4801b1-30a7-4f71-a17a-5838f44300e9" 00:22:14.581 ], 00:22:14.581 "product_name": "Malloc disk", 
00:22:14.581 "block_size": 512, 00:22:14.581 "num_blocks": 65536, 00:22:14.581 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:14.581 "assigned_rate_limits": { 00:22:14.581 "rw_ios_per_sec": 0, 00:22:14.581 "rw_mbytes_per_sec": 0, 00:22:14.581 "r_mbytes_per_sec": 0, 00:22:14.581 "w_mbytes_per_sec": 0 00:22:14.581 }, 00:22:14.581 "claimed": false, 00:22:14.581 "zoned": false, 00:22:14.581 "supported_io_types": { 00:22:14.581 "read": true, 00:22:14.581 "write": true, 00:22:14.581 "unmap": true, 00:22:14.581 "write_zeroes": true, 00:22:14.581 "flush": true, 00:22:14.581 "reset": true, 00:22:14.581 "compare": false, 00:22:14.581 "compare_and_write": false, 00:22:14.581 "abort": true, 00:22:14.581 "nvme_admin": false, 00:22:14.581 "nvme_io": false 00:22:14.581 }, 00:22:14.581 "memory_domains": [ 00:22:14.581 { 00:22:14.581 "dma_device_id": "system", 00:22:14.581 "dma_device_type": 1 00:22:14.581 }, 00:22:14.581 { 00:22:14.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.581 "dma_device_type": 2 00:22:14.581 } 00:22:14.581 ], 00:22:14.581 "driver_specific": {} 00:22:14.581 } 00:22:14.581 ] 00:22:14.581 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:14.581 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:14.581 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:14.581 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:14.840 BaseBdev3 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_timeout= 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:14.840 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.099 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:15.359 [ 00:22:15.359 { 00:22:15.359 "name": "BaseBdev3", 00:22:15.359 "aliases": [ 00:22:15.359 "13b763ee-c38d-425c-b6c4-2cb121abec4f" 00:22:15.359 ], 00:22:15.359 "product_name": "Malloc disk", 00:22:15.359 "block_size": 512, 00:22:15.359 "num_blocks": 65536, 00:22:15.359 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:15.359 "assigned_rate_limits": { 00:22:15.359 "rw_ios_per_sec": 0, 00:22:15.359 "rw_mbytes_per_sec": 0, 00:22:15.359 "r_mbytes_per_sec": 0, 00:22:15.359 "w_mbytes_per_sec": 0 00:22:15.359 }, 00:22:15.359 "claimed": false, 00:22:15.359 "zoned": false, 00:22:15.359 "supported_io_types": { 00:22:15.359 "read": true, 00:22:15.359 "write": true, 00:22:15.359 "unmap": true, 00:22:15.359 "write_zeroes": true, 00:22:15.359 "flush": true, 00:22:15.359 "reset": true, 00:22:15.359 "compare": false, 00:22:15.359 "compare_and_write": false, 00:22:15.359 "abort": true, 00:22:15.359 "nvme_admin": false, 00:22:15.359 "nvme_io": false 00:22:15.359 }, 00:22:15.359 "memory_domains": [ 00:22:15.359 { 00:22:15.359 "dma_device_id": "system", 00:22:15.359 "dma_device_type": 1 00:22:15.359 }, 00:22:15.359 { 00:22:15.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.359 "dma_device_type": 2 00:22:15.359 } 
00:22:15.359 ], 00:22:15.359 "driver_specific": {} 00:22:15.359 } 00:22:15.359 ] 00:22:15.359 19:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:15.359 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:15.359 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:15.359 19:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:15.359 BaseBdev4 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:15.359 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.619 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:15.878 [ 00:22:15.878 { 00:22:15.878 "name": "BaseBdev4", 00:22:15.878 "aliases": [ 00:22:15.878 "de55bed1-f899-4c7a-a2c7-ec2f1c059936" 00:22:15.878 ], 00:22:15.878 "product_name": "Malloc disk", 00:22:15.878 "block_size": 512, 00:22:15.878 "num_blocks": 65536, 
00:22:15.878 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:15.878 "assigned_rate_limits": { 00:22:15.878 "rw_ios_per_sec": 0, 00:22:15.878 "rw_mbytes_per_sec": 0, 00:22:15.878 "r_mbytes_per_sec": 0, 00:22:15.878 "w_mbytes_per_sec": 0 00:22:15.878 }, 00:22:15.878 "claimed": false, 00:22:15.878 "zoned": false, 00:22:15.878 "supported_io_types": { 00:22:15.878 "read": true, 00:22:15.878 "write": true, 00:22:15.878 "unmap": true, 00:22:15.878 "write_zeroes": true, 00:22:15.878 "flush": true, 00:22:15.878 "reset": true, 00:22:15.878 "compare": false, 00:22:15.878 "compare_and_write": false, 00:22:15.878 "abort": true, 00:22:15.878 "nvme_admin": false, 00:22:15.878 "nvme_io": false 00:22:15.878 }, 00:22:15.878 "memory_domains": [ 00:22:15.878 { 00:22:15.878 "dma_device_id": "system", 00:22:15.878 "dma_device_type": 1 00:22:15.878 }, 00:22:15.878 { 00:22:15.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.878 "dma_device_type": 2 00:22:15.878 } 00:22:15.878 ], 00:22:15.878 "driver_specific": {} 00:22:15.878 } 00:22:15.878 ] 00:22:15.878 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:15.878 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:15.878 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:15.878 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:16.138 [2024-06-10 19:06:30.754701] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:16.138 [2024-06-10 19:06:30.754737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:16.138 [2024-06-10 19:06:30.754754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:22:16.138 [2024-06-10 19:06:30.755989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:16.138 [2024-06-10 19:06:30.756030] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.138 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.398 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.398 "name": "Existed_Raid", 00:22:16.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.398 "strip_size_kb": 0, 00:22:16.398 
"state": "configuring", 00:22:16.398 "raid_level": "raid1", 00:22:16.398 "superblock": false, 00:22:16.398 "num_base_bdevs": 4, 00:22:16.398 "num_base_bdevs_discovered": 3, 00:22:16.398 "num_base_bdevs_operational": 4, 00:22:16.398 "base_bdevs_list": [ 00:22:16.398 { 00:22:16.398 "name": "BaseBdev1", 00:22:16.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.398 "is_configured": false, 00:22:16.398 "data_offset": 0, 00:22:16.398 "data_size": 0 00:22:16.398 }, 00:22:16.398 { 00:22:16.398 "name": "BaseBdev2", 00:22:16.398 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:16.398 "is_configured": true, 00:22:16.398 "data_offset": 0, 00:22:16.398 "data_size": 65536 00:22:16.398 }, 00:22:16.398 { 00:22:16.398 "name": "BaseBdev3", 00:22:16.398 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:16.398 "is_configured": true, 00:22:16.398 "data_offset": 0, 00:22:16.398 "data_size": 65536 00:22:16.398 }, 00:22:16.398 { 00:22:16.398 "name": "BaseBdev4", 00:22:16.398 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:16.398 "is_configured": true, 00:22:16.398 "data_offset": 0, 00:22:16.398 "data_size": 65536 00:22:16.398 } 00:22:16.398 ] 00:22:16.398 }' 00:22:16.398 19:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.398 19:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.967 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:17.226 [2024-06-10 19:06:31.781393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:17.226 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.227 19:06:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.227 19:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.486 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.486 "name": "Existed_Raid", 00:22:17.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.486 "strip_size_kb": 0, 00:22:17.486 "state": "configuring", 00:22:17.486 "raid_level": "raid1", 00:22:17.486 "superblock": false, 00:22:17.486 "num_base_bdevs": 4, 00:22:17.486 "num_base_bdevs_discovered": 2, 00:22:17.486 "num_base_bdevs_operational": 4, 00:22:17.486 "base_bdevs_list": [ 00:22:17.486 { 00:22:17.486 "name": "BaseBdev1", 00:22:17.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.486 "is_configured": false, 00:22:17.486 "data_offset": 0, 00:22:17.486 "data_size": 0 00:22:17.486 }, 00:22:17.486 { 00:22:17.486 "name": null, 00:22:17.486 "uuid": 
"aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:17.486 "is_configured": false, 00:22:17.486 "data_offset": 0, 00:22:17.486 "data_size": 65536 00:22:17.486 }, 00:22:17.486 { 00:22:17.486 "name": "BaseBdev3", 00:22:17.486 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:17.486 "is_configured": true, 00:22:17.486 "data_offset": 0, 00:22:17.486 "data_size": 65536 00:22:17.486 }, 00:22:17.486 { 00:22:17.486 "name": "BaseBdev4", 00:22:17.486 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:17.486 "is_configured": true, 00:22:17.486 "data_offset": 0, 00:22:17.486 "data_size": 65536 00:22:17.486 } 00:22:17.486 ] 00:22:17.486 }' 00:22:17.486 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.486 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.052 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.052 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:18.052 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:18.052 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:18.311 [2024-06-10 19:06:32.963713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.311 BaseBdev1 00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 
00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:18.311 19:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.571 19:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:18.830 [ 00:22:18.830 { 00:22:18.830 "name": "BaseBdev1", 00:22:18.830 "aliases": [ 00:22:18.830 "70d34e31-52c0-44b8-9676-1f101d2f6376" 00:22:18.830 ], 00:22:18.830 "product_name": "Malloc disk", 00:22:18.830 "block_size": 512, 00:22:18.830 "num_blocks": 65536, 00:22:18.830 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:18.830 "assigned_rate_limits": { 00:22:18.830 "rw_ios_per_sec": 0, 00:22:18.830 "rw_mbytes_per_sec": 0, 00:22:18.830 "r_mbytes_per_sec": 0, 00:22:18.830 "w_mbytes_per_sec": 0 00:22:18.830 }, 00:22:18.830 "claimed": true, 00:22:18.830 "claim_type": "exclusive_write", 00:22:18.830 "zoned": false, 00:22:18.830 "supported_io_types": { 00:22:18.830 "read": true, 00:22:18.830 "write": true, 00:22:18.830 "unmap": true, 00:22:18.830 "write_zeroes": true, 00:22:18.830 "flush": true, 00:22:18.830 "reset": true, 00:22:18.830 "compare": false, 00:22:18.830 "compare_and_write": false, 00:22:18.830 "abort": true, 00:22:18.830 "nvme_admin": false, 00:22:18.830 "nvme_io": false 00:22:18.830 }, 00:22:18.830 "memory_domains": [ 00:22:18.830 { 00:22:18.830 "dma_device_id": "system", 00:22:18.830 "dma_device_type": 1 00:22:18.830 }, 00:22:18.830 { 00:22:18.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.830 
"dma_device_type": 2 00:22:18.830 } 00:22:18.830 ], 00:22:18.830 "driver_specific": {} 00:22:18.830 } 00:22:18.830 ] 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.830 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.089 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.089 "name": "Existed_Raid", 00:22:19.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.089 "strip_size_kb": 0, 00:22:19.089 "state": "configuring", 00:22:19.089 
"raid_level": "raid1", 00:22:19.089 "superblock": false, 00:22:19.089 "num_base_bdevs": 4, 00:22:19.089 "num_base_bdevs_discovered": 3, 00:22:19.089 "num_base_bdevs_operational": 4, 00:22:19.089 "base_bdevs_list": [ 00:22:19.089 { 00:22:19.089 "name": "BaseBdev1", 00:22:19.089 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:19.089 "is_configured": true, 00:22:19.089 "data_offset": 0, 00:22:19.089 "data_size": 65536 00:22:19.089 }, 00:22:19.089 { 00:22:19.089 "name": null, 00:22:19.089 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:19.089 "is_configured": false, 00:22:19.089 "data_offset": 0, 00:22:19.089 "data_size": 65536 00:22:19.089 }, 00:22:19.089 { 00:22:19.089 "name": "BaseBdev3", 00:22:19.089 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:19.089 "is_configured": true, 00:22:19.089 "data_offset": 0, 00:22:19.089 "data_size": 65536 00:22:19.089 }, 00:22:19.089 { 00:22:19.089 "name": "BaseBdev4", 00:22:19.089 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:19.089 "is_configured": true, 00:22:19.089 "data_offset": 0, 00:22:19.089 "data_size": 65536 00:22:19.089 } 00:22:19.089 ] 00:22:19.089 }' 00:22:19.089 19:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.089 19:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.657 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.657 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:19.916 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:19.916 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 
00:22:19.916 [2024-06-10 19:06:34.656202] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.176 "name": "Existed_Raid", 00:22:20.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.176 "strip_size_kb": 0, 00:22:20.176 "state": "configuring", 00:22:20.176 "raid_level": "raid1", 00:22:20.176 "superblock": false, 00:22:20.176 "num_base_bdevs": 4, 00:22:20.176 
"num_base_bdevs_discovered": 2, 00:22:20.176 "num_base_bdevs_operational": 4, 00:22:20.176 "base_bdevs_list": [ 00:22:20.176 { 00:22:20.176 "name": "BaseBdev1", 00:22:20.176 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:20.176 "is_configured": true, 00:22:20.176 "data_offset": 0, 00:22:20.176 "data_size": 65536 00:22:20.176 }, 00:22:20.176 { 00:22:20.176 "name": null, 00:22:20.176 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:20.176 "is_configured": false, 00:22:20.176 "data_offset": 0, 00:22:20.176 "data_size": 65536 00:22:20.176 }, 00:22:20.176 { 00:22:20.176 "name": null, 00:22:20.176 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:20.176 "is_configured": false, 00:22:20.176 "data_offset": 0, 00:22:20.176 "data_size": 65536 00:22:20.176 }, 00:22:20.176 { 00:22:20.176 "name": "BaseBdev4", 00:22:20.176 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:20.176 "is_configured": true, 00:22:20.176 "data_offset": 0, 00:22:20.176 "data_size": 65536 00:22:20.176 } 00:22:20.176 ] 00:22:20.176 }' 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.176 19:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.744 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.744 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:21.003 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:21.003 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:21.262 [2024-06-10 19:06:35.819493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev3 is claimed 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.262 19:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.521 19:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.521 "name": "Existed_Raid", 00:22:21.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.521 "strip_size_kb": 0, 00:22:21.521 "state": "configuring", 00:22:21.521 "raid_level": "raid1", 00:22:21.521 "superblock": false, 00:22:21.521 "num_base_bdevs": 4, 00:22:21.521 "num_base_bdevs_discovered": 3, 00:22:21.521 "num_base_bdevs_operational": 4, 00:22:21.521 
"base_bdevs_list": [ 00:22:21.521 { 00:22:21.521 "name": "BaseBdev1", 00:22:21.521 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:21.521 "is_configured": true, 00:22:21.521 "data_offset": 0, 00:22:21.521 "data_size": 65536 00:22:21.521 }, 00:22:21.521 { 00:22:21.521 "name": null, 00:22:21.521 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:21.521 "is_configured": false, 00:22:21.521 "data_offset": 0, 00:22:21.521 "data_size": 65536 00:22:21.521 }, 00:22:21.521 { 00:22:21.521 "name": "BaseBdev3", 00:22:21.521 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:21.521 "is_configured": true, 00:22:21.521 "data_offset": 0, 00:22:21.521 "data_size": 65536 00:22:21.521 }, 00:22:21.521 { 00:22:21.521 "name": "BaseBdev4", 00:22:21.521 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:21.521 "is_configured": true, 00:22:21.521 "data_offset": 0, 00:22:21.521 "data_size": 65536 00:22:21.521 } 00:22:21.521 ] 00:22:21.521 }' 00:22:21.521 19:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.521 19:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.089 19:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.089 19:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:22.089 19:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:22.089 19:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:22.349 [2024-06-10 19:06:36.990596] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.349 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.608 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.608 "name": "Existed_Raid", 00:22:22.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.608 "strip_size_kb": 0, 00:22:22.608 "state": "configuring", 00:22:22.608 "raid_level": "raid1", 00:22:22.608 "superblock": false, 00:22:22.608 "num_base_bdevs": 4, 00:22:22.608 "num_base_bdevs_discovered": 2, 00:22:22.608 "num_base_bdevs_operational": 4, 00:22:22.608 "base_bdevs_list": [ 00:22:22.608 { 00:22:22.608 "name": null, 00:22:22.608 "uuid": 
"70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:22.608 "is_configured": false, 00:22:22.608 "data_offset": 0, 00:22:22.608 "data_size": 65536 00:22:22.608 }, 00:22:22.608 { 00:22:22.608 "name": null, 00:22:22.608 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:22.608 "is_configured": false, 00:22:22.608 "data_offset": 0, 00:22:22.608 "data_size": 65536 00:22:22.608 }, 00:22:22.608 { 00:22:22.608 "name": "BaseBdev3", 00:22:22.608 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:22.608 "is_configured": true, 00:22:22.608 "data_offset": 0, 00:22:22.608 "data_size": 65536 00:22:22.608 }, 00:22:22.608 { 00:22:22.608 "name": "BaseBdev4", 00:22:22.608 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:22.608 "is_configured": true, 00:22:22.608 "data_offset": 0, 00:22:22.608 "data_size": 65536 00:22:22.608 } 00:22:22.608 ] 00:22:22.608 }' 00:22:22.608 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.608 19:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.251 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.251 19:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:23.520 [2024-06-10 19:06:38.243430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.520 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.778 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.778 "name": "Existed_Raid", 00:22:23.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.778 "strip_size_kb": 0, 00:22:23.778 "state": "configuring", 00:22:23.778 "raid_level": "raid1", 00:22:23.778 "superblock": false, 00:22:23.778 "num_base_bdevs": 4, 00:22:23.778 "num_base_bdevs_discovered": 3, 00:22:23.778 "num_base_bdevs_operational": 4, 00:22:23.778 "base_bdevs_list": [ 00:22:23.778 { 00:22:23.778 "name": null, 00:22:23.778 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:23.778 "is_configured": false, 00:22:23.778 
"data_offset": 0, 00:22:23.778 "data_size": 65536 00:22:23.778 }, 00:22:23.778 { 00:22:23.778 "name": "BaseBdev2", 00:22:23.778 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:23.778 "is_configured": true, 00:22:23.778 "data_offset": 0, 00:22:23.778 "data_size": 65536 00:22:23.778 }, 00:22:23.778 { 00:22:23.778 "name": "BaseBdev3", 00:22:23.778 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:23.778 "is_configured": true, 00:22:23.778 "data_offset": 0, 00:22:23.778 "data_size": 65536 00:22:23.778 }, 00:22:23.778 { 00:22:23.778 "name": "BaseBdev4", 00:22:23.778 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:23.778 "is_configured": true, 00:22:23.778 "data_offset": 0, 00:22:23.778 "data_size": 65536 00:22:23.778 } 00:22:23.778 ] 00:22:23.778 }' 00:22:23.778 19:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.778 19:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.346 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.346 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:24.606 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:24.606 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.606 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:24.865 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 70d34e31-52c0-44b8-9676-1f101d2f6376 
00:22:25.124 [2024-06-10 19:06:39.734638] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:25.124 [2024-06-10 19:06:39.734673] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe708b0 00:22:25.124 [2024-06-10 19:06:39.734680] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:25.124 [2024-06-10 19:06:39.734858] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd82f80 00:22:25.124 [2024-06-10 19:06:39.734970] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe708b0 00:22:25.124 [2024-06-10 19:06:39.734979] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe708b0 00:22:25.124 [2024-06-10 19:06:39.735125] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.124 NewBaseBdev 00:22:25.124 19:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:25.124 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:22:25.124 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:25.124 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:22:25.124 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:25.124 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:25.125 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.384 19:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 
-t 2000 00:22:25.643 [ 00:22:25.643 { 00:22:25.643 "name": "NewBaseBdev", 00:22:25.643 "aliases": [ 00:22:25.643 "70d34e31-52c0-44b8-9676-1f101d2f6376" 00:22:25.643 ], 00:22:25.643 "product_name": "Malloc disk", 00:22:25.644 "block_size": 512, 00:22:25.644 "num_blocks": 65536, 00:22:25.644 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:25.644 "assigned_rate_limits": { 00:22:25.644 "rw_ios_per_sec": 0, 00:22:25.644 "rw_mbytes_per_sec": 0, 00:22:25.644 "r_mbytes_per_sec": 0, 00:22:25.644 "w_mbytes_per_sec": 0 00:22:25.644 }, 00:22:25.644 "claimed": true, 00:22:25.644 "claim_type": "exclusive_write", 00:22:25.644 "zoned": false, 00:22:25.644 "supported_io_types": { 00:22:25.644 "read": true, 00:22:25.644 "write": true, 00:22:25.644 "unmap": true, 00:22:25.644 "write_zeroes": true, 00:22:25.644 "flush": true, 00:22:25.644 "reset": true, 00:22:25.644 "compare": false, 00:22:25.644 "compare_and_write": false, 00:22:25.644 "abort": true, 00:22:25.644 "nvme_admin": false, 00:22:25.644 "nvme_io": false 00:22:25.644 }, 00:22:25.644 "memory_domains": [ 00:22:25.644 { 00:22:25.644 "dma_device_id": "system", 00:22:25.644 "dma_device_type": 1 00:22:25.644 }, 00:22:25.644 { 00:22:25.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.644 "dma_device_type": 2 00:22:25.644 } 00:22:25.644 ], 00:22:25.644 "driver_specific": {} 00:22:25.644 } 00:22:25.644 ] 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.644 19:06:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.644 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.903 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.903 "name": "Existed_Raid", 00:22:25.903 "uuid": "55f515d1-1c50-4a75-8a03-6e9cc7ca905b", 00:22:25.903 "strip_size_kb": 0, 00:22:25.903 "state": "online", 00:22:25.903 "raid_level": "raid1", 00:22:25.903 "superblock": false, 00:22:25.903 "num_base_bdevs": 4, 00:22:25.903 "num_base_bdevs_discovered": 4, 00:22:25.903 "num_base_bdevs_operational": 4, 00:22:25.903 "base_bdevs_list": [ 00:22:25.903 { 00:22:25.903 "name": "NewBaseBdev", 00:22:25.903 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:25.904 "is_configured": true, 00:22:25.904 "data_offset": 0, 00:22:25.904 "data_size": 65536 00:22:25.904 }, 00:22:25.904 { 00:22:25.904 "name": "BaseBdev2", 00:22:25.904 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:25.904 "is_configured": true, 00:22:25.904 "data_offset": 0, 00:22:25.904 "data_size": 65536 00:22:25.904 }, 00:22:25.904 { 00:22:25.904 "name": "BaseBdev3", 00:22:25.904 "uuid": 
"13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:25.904 "is_configured": true, 00:22:25.904 "data_offset": 0, 00:22:25.904 "data_size": 65536 00:22:25.904 }, 00:22:25.904 { 00:22:25.904 "name": "BaseBdev4", 00:22:25.904 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:25.904 "is_configured": true, 00:22:25.904 "data_offset": 0, 00:22:25.904 "data_size": 65536 00:22:25.904 } 00:22:25.904 ] 00:22:25.904 }' 00:22:25.904 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.904 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:26.472 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:26.472 [2024-06-10 19:06:41.222965] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.731 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:26.731 "name": "Existed_Raid", 00:22:26.731 "aliases": [ 00:22:26.731 "55f515d1-1c50-4a75-8a03-6e9cc7ca905b" 00:22:26.731 ], 00:22:26.731 "product_name": "Raid Volume", 
00:22:26.731 "block_size": 512, 00:22:26.731 "num_blocks": 65536, 00:22:26.731 "uuid": "55f515d1-1c50-4a75-8a03-6e9cc7ca905b", 00:22:26.731 "assigned_rate_limits": { 00:22:26.731 "rw_ios_per_sec": 0, 00:22:26.731 "rw_mbytes_per_sec": 0, 00:22:26.731 "r_mbytes_per_sec": 0, 00:22:26.731 "w_mbytes_per_sec": 0 00:22:26.731 }, 00:22:26.731 "claimed": false, 00:22:26.731 "zoned": false, 00:22:26.731 "supported_io_types": { 00:22:26.731 "read": true, 00:22:26.731 "write": true, 00:22:26.731 "unmap": false, 00:22:26.731 "write_zeroes": true, 00:22:26.731 "flush": false, 00:22:26.731 "reset": true, 00:22:26.731 "compare": false, 00:22:26.731 "compare_and_write": false, 00:22:26.731 "abort": false, 00:22:26.731 "nvme_admin": false, 00:22:26.731 "nvme_io": false 00:22:26.731 }, 00:22:26.731 "memory_domains": [ 00:22:26.731 { 00:22:26.731 "dma_device_id": "system", 00:22:26.731 "dma_device_type": 1 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.731 "dma_device_type": 2 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "system", 00:22:26.731 "dma_device_type": 1 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.731 "dma_device_type": 2 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "system", 00:22:26.731 "dma_device_type": 1 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.731 "dma_device_type": 2 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "system", 00:22:26.731 "dma_device_type": 1 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.731 "dma_device_type": 2 00:22:26.731 } 00:22:26.731 ], 00:22:26.731 "driver_specific": { 00:22:26.731 "raid": { 00:22:26.731 "uuid": "55f515d1-1c50-4a75-8a03-6e9cc7ca905b", 00:22:26.731 "strip_size_kb": 0, 00:22:26.731 "state": "online", 00:22:26.731 "raid_level": "raid1", 00:22:26.731 "superblock": false, 00:22:26.731 
"num_base_bdevs": 4, 00:22:26.731 "num_base_bdevs_discovered": 4, 00:22:26.731 "num_base_bdevs_operational": 4, 00:22:26.731 "base_bdevs_list": [ 00:22:26.731 { 00:22:26.731 "name": "NewBaseBdev", 00:22:26.731 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:26.731 "is_configured": true, 00:22:26.731 "data_offset": 0, 00:22:26.731 "data_size": 65536 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "name": "BaseBdev2", 00:22:26.731 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:26.731 "is_configured": true, 00:22:26.731 "data_offset": 0, 00:22:26.731 "data_size": 65536 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "name": "BaseBdev3", 00:22:26.731 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:26.731 "is_configured": true, 00:22:26.731 "data_offset": 0, 00:22:26.731 "data_size": 65536 00:22:26.731 }, 00:22:26.731 { 00:22:26.731 "name": "BaseBdev4", 00:22:26.731 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:26.732 "is_configured": true, 00:22:26.732 "data_offset": 0, 00:22:26.732 "data_size": 65536 00:22:26.732 } 00:22:26.732 ] 00:22:26.732 } 00:22:26.732 } 00:22:26.732 }' 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:26.732 BaseBdev2 00:22:26.732 BaseBdev3 00:22:26.732 BaseBdev4' 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:22:26.732 "name": "NewBaseBdev", 00:22:26.732 "aliases": [ 00:22:26.732 "70d34e31-52c0-44b8-9676-1f101d2f6376" 00:22:26.732 ], 00:22:26.732 "product_name": "Malloc disk", 00:22:26.732 "block_size": 512, 00:22:26.732 "num_blocks": 65536, 00:22:26.732 "uuid": "70d34e31-52c0-44b8-9676-1f101d2f6376", 00:22:26.732 "assigned_rate_limits": { 00:22:26.732 "rw_ios_per_sec": 0, 00:22:26.732 "rw_mbytes_per_sec": 0, 00:22:26.732 "r_mbytes_per_sec": 0, 00:22:26.732 "w_mbytes_per_sec": 0 00:22:26.732 }, 00:22:26.732 "claimed": true, 00:22:26.732 "claim_type": "exclusive_write", 00:22:26.732 "zoned": false, 00:22:26.732 "supported_io_types": { 00:22:26.732 "read": true, 00:22:26.732 "write": true, 00:22:26.732 "unmap": true, 00:22:26.732 "write_zeroes": true, 00:22:26.732 "flush": true, 00:22:26.732 "reset": true, 00:22:26.732 "compare": false, 00:22:26.732 "compare_and_write": false, 00:22:26.732 "abort": true, 00:22:26.732 "nvme_admin": false, 00:22:26.732 "nvme_io": false 00:22:26.732 }, 00:22:26.732 "memory_domains": [ 00:22:26.732 { 00:22:26.732 "dma_device_id": "system", 00:22:26.732 "dma_device_type": 1 00:22:26.732 }, 00:22:26.732 { 00:22:26.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.732 "dma_device_type": 2 00:22:26.732 } 00:22:26.732 ], 00:22:26.732 "driver_specific": {} 00:22:26.732 }' 00:22:26.732 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.991 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.250 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.250 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.250 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.250 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:27.250 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.509 "name": "BaseBdev2", 00:22:27.509 "aliases": [ 00:22:27.509 "aa4801b1-30a7-4f71-a17a-5838f44300e9" 00:22:27.509 ], 00:22:27.509 "product_name": "Malloc disk", 00:22:27.509 "block_size": 512, 00:22:27.509 "num_blocks": 65536, 00:22:27.509 "uuid": "aa4801b1-30a7-4f71-a17a-5838f44300e9", 00:22:27.509 "assigned_rate_limits": { 00:22:27.509 "rw_ios_per_sec": 0, 00:22:27.509 "rw_mbytes_per_sec": 0, 00:22:27.509 "r_mbytes_per_sec": 0, 00:22:27.509 "w_mbytes_per_sec": 0 00:22:27.509 }, 00:22:27.509 "claimed": true, 00:22:27.509 "claim_type": "exclusive_write", 00:22:27.509 "zoned": false, 00:22:27.509 "supported_io_types": { 00:22:27.509 "read": true, 00:22:27.509 "write": true, 00:22:27.509 "unmap": true, 00:22:27.509 "write_zeroes": true, 00:22:27.509 "flush": true, 00:22:27.509 "reset": true, 00:22:27.509 "compare": false, 00:22:27.509 "compare_and_write": false, 00:22:27.509 "abort": true, 
00:22:27.509 "nvme_admin": false, 00:22:27.509 "nvme_io": false 00:22:27.509 }, 00:22:27.509 "memory_domains": [ 00:22:27.509 { 00:22:27.509 "dma_device_id": "system", 00:22:27.509 "dma_device_type": 1 00:22:27.509 }, 00:22:27.509 { 00:22:27.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.509 "dma_device_type": 2 00:22:27.509 } 00:22:27.509 ], 00:22:27.509 "driver_specific": {} 00:22:27.509 }' 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.509 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.768 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:27.768 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.768 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.769 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.769 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.769 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:27.769 19:06:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.028 "name": "BaseBdev3", 00:22:28.028 "aliases": [ 00:22:28.028 "13b763ee-c38d-425c-b6c4-2cb121abec4f" 00:22:28.028 ], 00:22:28.028 "product_name": "Malloc disk", 00:22:28.028 "block_size": 512, 00:22:28.028 "num_blocks": 65536, 00:22:28.028 "uuid": "13b763ee-c38d-425c-b6c4-2cb121abec4f", 00:22:28.028 "assigned_rate_limits": { 00:22:28.028 "rw_ios_per_sec": 0, 00:22:28.028 "rw_mbytes_per_sec": 0, 00:22:28.028 "r_mbytes_per_sec": 0, 00:22:28.028 "w_mbytes_per_sec": 0 00:22:28.028 }, 00:22:28.028 "claimed": true, 00:22:28.028 "claim_type": "exclusive_write", 00:22:28.028 "zoned": false, 00:22:28.028 "supported_io_types": { 00:22:28.028 "read": true, 00:22:28.028 "write": true, 00:22:28.028 "unmap": true, 00:22:28.028 "write_zeroes": true, 00:22:28.028 "flush": true, 00:22:28.028 "reset": true, 00:22:28.028 "compare": false, 00:22:28.028 "compare_and_write": false, 00:22:28.028 "abort": true, 00:22:28.028 "nvme_admin": false, 00:22:28.028 "nvme_io": false 00:22:28.028 }, 00:22:28.028 "memory_domains": [ 00:22:28.028 { 00:22:28.028 "dma_device_id": "system", 00:22:28.028 "dma_device_type": 1 00:22:28.028 }, 00:22:28.028 { 00:22:28.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.028 "dma_device_type": 2 00:22:28.028 } 00:22:28.028 ], 00:22:28.028 "driver_specific": {} 00:22:28.028 }' 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.028 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:28.288 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.547 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.547 "name": "BaseBdev4", 00:22:28.547 "aliases": [ 00:22:28.547 "de55bed1-f899-4c7a-a2c7-ec2f1c059936" 00:22:28.547 ], 00:22:28.547 "product_name": "Malloc disk", 00:22:28.547 "block_size": 512, 00:22:28.547 "num_blocks": 65536, 00:22:28.547 "uuid": "de55bed1-f899-4c7a-a2c7-ec2f1c059936", 00:22:28.547 "assigned_rate_limits": { 00:22:28.547 "rw_ios_per_sec": 0, 00:22:28.547 "rw_mbytes_per_sec": 0, 00:22:28.547 "r_mbytes_per_sec": 0, 00:22:28.547 "w_mbytes_per_sec": 0 00:22:28.547 }, 00:22:28.547 "claimed": true, 00:22:28.547 "claim_type": "exclusive_write", 00:22:28.547 "zoned": false, 00:22:28.547 "supported_io_types": { 00:22:28.547 "read": true, 00:22:28.547 "write": true, 00:22:28.547 "unmap": true, 00:22:28.547 
"write_zeroes": true, 00:22:28.547 "flush": true, 00:22:28.547 "reset": true, 00:22:28.547 "compare": false, 00:22:28.547 "compare_and_write": false, 00:22:28.547 "abort": true, 00:22:28.547 "nvme_admin": false, 00:22:28.547 "nvme_io": false 00:22:28.547 }, 00:22:28.547 "memory_domains": [ 00:22:28.547 { 00:22:28.547 "dma_device_id": "system", 00:22:28.547 "dma_device_type": 1 00:22:28.547 }, 00:22:28.547 { 00:22:28.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.547 "dma_device_type": 2 00:22:28.547 } 00:22:28.547 ], 00:22:28.547 "driver_specific": {} 00:22:28.547 }' 00:22:28.547 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.547 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.547 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.547 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.547 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.806 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:22:29.066 [2024-06-10 19:06:43.697244] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:29.066 [2024-06-10 19:06:43.697268] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.066 [2024-06-10 19:06:43.697316] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.066 [2024-06-10 19:06:43.697560] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.066 [2024-06-10 19:06:43.697571] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe708b0 name Existed_Raid, state offline 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1726329 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 1726329 ']' 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 1726329 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1726329 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1726329' 00:22:29.066 killing process with pid 1726329 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 1726329 00:22:29.066 [2024-06-10 19:06:43.773400] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:22:29.066 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 1726329 00:22:29.066 [2024-06-10 19:06:43.804454] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:29.325 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:29.325 00:22:29.325 real 0m29.681s 00:22:29.325 user 0m54.522s 00:22:29.325 sys 0m5.268s 00:22:29.325 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:29.325 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.325 ************************************ 00:22:29.325 END TEST raid_state_function_test 00:22:29.325 ************************************ 00:22:29.326 19:06:44 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:29.326 19:06:44 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:22:29.326 19:06:44 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:29.326 19:06:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:29.326 ************************************ 00:22:29.326 START TEST raid_state_function_test_sb 00:22:29.326 ************************************ 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 4 true 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 
00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:29.326 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1732069 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1732069' 00:22:29.586 Process raid pid: 1732069 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1732069 /var/tmp/spdk-raid.sock 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1732069 ']' 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:29.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.586 19:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:29.586 [2024-06-10 19:06:44.134289] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:22:29.586 [2024-06-10 19:06:44.134344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.0 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.1 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.2 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.3 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.4 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.5 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.6 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:01.7 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:22:29.586 EAL: Requested device 0000:b6:02.0 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.1 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.2 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.3 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.4 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.5 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.6 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b6:02.7 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.0 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.1 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.2 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.3 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.4 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.5 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: 
Requested device 0000:b8:01.6 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:01.7 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.0 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.1 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.2 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.3 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.4 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.5 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.6 cannot be used 00:22:29.586 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:29.586 EAL: Requested device 0000:b8:02.7 cannot be used 00:22:29.586 [2024-06-10 19:06:44.268980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.846 [2024-06-10 19:06:44.355687] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.846 [2024-06-10 19:06:44.413663] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.846 [2024-06-10 19:06:44.413698] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:30.414 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:30.415 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:22:30.415 19:06:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:30.674 [2024-06-10 19:06:45.232690] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:30.674 [2024-06-10 19:06:45.232729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:30.674 [2024-06-10 19:06:45.232739] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:30.674 [2024-06-10 19:06:45.232751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:30.674 [2024-06-10 19:06:45.232759] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:30.674 [2024-06-10 19:06:45.232770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:30.674 [2024-06-10 19:06:45.232778] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:30.674 [2024-06-10 19:06:45.232788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=4 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.674 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.933 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.933 "name": "Existed_Raid", 00:22:30.933 "uuid": "ead18a45-0508-47e1-a155-1b9ede03d673", 00:22:30.933 "strip_size_kb": 0, 00:22:30.933 "state": "configuring", 00:22:30.933 "raid_level": "raid1", 00:22:30.933 "superblock": true, 00:22:30.933 "num_base_bdevs": 4, 00:22:30.933 "num_base_bdevs_discovered": 0, 00:22:30.933 "num_base_bdevs_operational": 4, 00:22:30.934 "base_bdevs_list": [ 00:22:30.934 { 00:22:30.934 "name": "BaseBdev1", 00:22:30.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.934 "is_configured": false, 00:22:30.934 "data_offset": 0, 00:22:30.934 "data_size": 0 00:22:30.934 }, 00:22:30.934 { 00:22:30.934 "name": "BaseBdev2", 00:22:30.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.934 "is_configured": false, 00:22:30.934 "data_offset": 0, 00:22:30.934 "data_size": 0 00:22:30.934 }, 00:22:30.934 { 00:22:30.934 "name": "BaseBdev3", 00:22:30.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.934 "is_configured": false, 00:22:30.934 "data_offset": 0, 00:22:30.934 "data_size": 0 00:22:30.934 }, 
00:22:30.934 { 00:22:30.934 "name": "BaseBdev4", 00:22:30.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.934 "is_configured": false, 00:22:30.934 "data_offset": 0, 00:22:30.934 "data_size": 0 00:22:30.934 } 00:22:30.934 ] 00:22:30.934 }' 00:22:30.934 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.934 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.503 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:31.503 [2024-06-10 19:06:46.247222] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:31.503 [2024-06-10 19:06:46.247248] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25e4f50 name Existed_Raid, state configuring 00:22:31.762 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:31.762 [2024-06-10 19:06:46.475836] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:31.762 [2024-06-10 19:06:46.475859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:31.762 [2024-06-10 19:06:46.475868] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:31.762 [2024-06-10 19:06:46.475879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:31.762 [2024-06-10 19:06:46.475887] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:31.762 [2024-06-10 19:06:46.475898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:22:31.762 [2024-06-10 19:06:46.475906] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:31.762 [2024-06-10 19:06:46.475916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:31.762 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:32.021 [2024-06-10 19:06:46.713911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.021 BaseBdev1 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:32.021 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:32.280 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:32.540 [ 00:22:32.540 { 00:22:32.540 "name": "BaseBdev1", 00:22:32.540 "aliases": [ 00:22:32.540 "27351a95-5585-4a0c-8d89-86285e48a818" 00:22:32.540 ], 00:22:32.540 "product_name": "Malloc disk", 00:22:32.540 "block_size": 512, 00:22:32.540 
"num_blocks": 65536, 00:22:32.540 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:32.540 "assigned_rate_limits": { 00:22:32.540 "rw_ios_per_sec": 0, 00:22:32.540 "rw_mbytes_per_sec": 0, 00:22:32.540 "r_mbytes_per_sec": 0, 00:22:32.540 "w_mbytes_per_sec": 0 00:22:32.540 }, 00:22:32.540 "claimed": true, 00:22:32.540 "claim_type": "exclusive_write", 00:22:32.540 "zoned": false, 00:22:32.540 "supported_io_types": { 00:22:32.540 "read": true, 00:22:32.540 "write": true, 00:22:32.540 "unmap": true, 00:22:32.540 "write_zeroes": true, 00:22:32.540 "flush": true, 00:22:32.540 "reset": true, 00:22:32.540 "compare": false, 00:22:32.540 "compare_and_write": false, 00:22:32.540 "abort": true, 00:22:32.540 "nvme_admin": false, 00:22:32.540 "nvme_io": false 00:22:32.540 }, 00:22:32.540 "memory_domains": [ 00:22:32.540 { 00:22:32.540 "dma_device_id": "system", 00:22:32.540 "dma_device_type": 1 00:22:32.540 }, 00:22:32.540 { 00:22:32.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.540 "dma_device_type": 2 00:22:32.540 } 00:22:32.540 ], 00:22:32.540 "driver_specific": {} 00:22:32.540 } 00:22:32.540 ] 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.540 19:06:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.540 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.799 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.799 "name": "Existed_Raid", 00:22:32.799 "uuid": "37185c28-834c-4e65-8276-16dc1b318b54", 00:22:32.799 "strip_size_kb": 0, 00:22:32.799 "state": "configuring", 00:22:32.799 "raid_level": "raid1", 00:22:32.799 "superblock": true, 00:22:32.799 "num_base_bdevs": 4, 00:22:32.799 "num_base_bdevs_discovered": 1, 00:22:32.799 "num_base_bdevs_operational": 4, 00:22:32.799 "base_bdevs_list": [ 00:22:32.799 { 00:22:32.799 "name": "BaseBdev1", 00:22:32.799 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:32.799 "is_configured": true, 00:22:32.799 "data_offset": 2048, 00:22:32.799 "data_size": 63488 00:22:32.799 }, 00:22:32.799 { 00:22:32.799 "name": "BaseBdev2", 00:22:32.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.799 "is_configured": false, 00:22:32.799 "data_offset": 0, 00:22:32.799 "data_size": 0 00:22:32.799 }, 00:22:32.799 { 00:22:32.799 "name": "BaseBdev3", 00:22:32.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.799 "is_configured": false, 00:22:32.799 "data_offset": 0, 00:22:32.799 "data_size": 0 00:22:32.799 }, 00:22:32.799 { 00:22:32.799 "name": "BaseBdev4", 00:22:32.799 
"uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.800 "is_configured": false, 00:22:32.800 "data_offset": 0, 00:22:32.800 "data_size": 0 00:22:32.800 } 00:22:32.800 ] 00:22:32.800 }' 00:22:32.800 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.800 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.367 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:33.627 [2024-06-10 19:06:48.129650] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:33.627 [2024-06-10 19:06:48.129688] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25e47c0 name Existed_Raid, state configuring 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:33.627 [2024-06-10 19:06:48.358287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:33.627 [2024-06-10 19:06:48.359656] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:33.627 [2024-06-10 19:06:48.359687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:33.627 [2024-06-10 19:06:48.359696] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:33.627 [2024-06-10 19:06:48.359708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:33.627 [2024-06-10 19:06:48.359716] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:33.627 [2024-06-10 19:06:48.359727] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.627 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.887 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.887 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.887 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.887 "name": "Existed_Raid", 00:22:33.887 "uuid": 
"d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:33.887 "strip_size_kb": 0, 00:22:33.887 "state": "configuring", 00:22:33.887 "raid_level": "raid1", 00:22:33.887 "superblock": true, 00:22:33.887 "num_base_bdevs": 4, 00:22:33.887 "num_base_bdevs_discovered": 1, 00:22:33.887 "num_base_bdevs_operational": 4, 00:22:33.887 "base_bdevs_list": [ 00:22:33.887 { 00:22:33.887 "name": "BaseBdev1", 00:22:33.887 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:33.887 "is_configured": true, 00:22:33.887 "data_offset": 2048, 00:22:33.887 "data_size": 63488 00:22:33.887 }, 00:22:33.887 { 00:22:33.887 "name": "BaseBdev2", 00:22:33.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.887 "is_configured": false, 00:22:33.887 "data_offset": 0, 00:22:33.887 "data_size": 0 00:22:33.887 }, 00:22:33.887 { 00:22:33.887 "name": "BaseBdev3", 00:22:33.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.887 "is_configured": false, 00:22:33.887 "data_offset": 0, 00:22:33.887 "data_size": 0 00:22:33.887 }, 00:22:33.887 { 00:22:33.887 "name": "BaseBdev4", 00:22:33.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.887 "is_configured": false, 00:22:33.887 "data_offset": 0, 00:22:33.887 "data_size": 0 00:22:33.887 } 00:22:33.887 ] 00:22:33.887 }' 00:22:33.887 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.887 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.456 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:34.716 [2024-06-10 19:06:49.396135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.716 BaseBdev2 00:22:34.716 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:34.716 19:06:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:22:34.716 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:34.716 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:34.716 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:34.716 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:34.716 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:34.975 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:35.235 [ 00:22:35.235 { 00:22:35.235 "name": "BaseBdev2", 00:22:35.235 "aliases": [ 00:22:35.235 "266e0876-fdd6-4acb-aa5b-3cf950843dfa" 00:22:35.235 ], 00:22:35.235 "product_name": "Malloc disk", 00:22:35.235 "block_size": 512, 00:22:35.235 "num_blocks": 65536, 00:22:35.235 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:35.235 "assigned_rate_limits": { 00:22:35.235 "rw_ios_per_sec": 0, 00:22:35.235 "rw_mbytes_per_sec": 0, 00:22:35.235 "r_mbytes_per_sec": 0, 00:22:35.235 "w_mbytes_per_sec": 0 00:22:35.235 }, 00:22:35.235 "claimed": true, 00:22:35.235 "claim_type": "exclusive_write", 00:22:35.235 "zoned": false, 00:22:35.235 "supported_io_types": { 00:22:35.235 "read": true, 00:22:35.235 "write": true, 00:22:35.235 "unmap": true, 00:22:35.235 "write_zeroes": true, 00:22:35.235 "flush": true, 00:22:35.235 "reset": true, 00:22:35.235 "compare": false, 00:22:35.235 "compare_and_write": false, 00:22:35.235 "abort": true, 00:22:35.235 "nvme_admin": false, 00:22:35.235 "nvme_io": false 
00:22:35.235 }, 00:22:35.235 "memory_domains": [ 00:22:35.235 { 00:22:35.235 "dma_device_id": "system", 00:22:35.235 "dma_device_type": 1 00:22:35.235 }, 00:22:35.235 { 00:22:35.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.235 "dma_device_type": 2 00:22:35.235 } 00:22:35.235 ], 00:22:35.235 "driver_specific": {} 00:22:35.235 } 00:22:35.235 ] 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.235 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.494 19:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.494 "name": "Existed_Raid", 00:22:35.494 "uuid": "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:35.494 "strip_size_kb": 0, 00:22:35.494 "state": "configuring", 00:22:35.495 "raid_level": "raid1", 00:22:35.495 "superblock": true, 00:22:35.495 "num_base_bdevs": 4, 00:22:35.495 "num_base_bdevs_discovered": 2, 00:22:35.495 "num_base_bdevs_operational": 4, 00:22:35.495 "base_bdevs_list": [ 00:22:35.495 { 00:22:35.495 "name": "BaseBdev1", 00:22:35.495 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:35.495 "is_configured": true, 00:22:35.495 "data_offset": 2048, 00:22:35.495 "data_size": 63488 00:22:35.495 }, 00:22:35.495 { 00:22:35.495 "name": "BaseBdev2", 00:22:35.495 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:35.495 "is_configured": true, 00:22:35.495 "data_offset": 2048, 00:22:35.495 "data_size": 63488 00:22:35.495 }, 00:22:35.495 { 00:22:35.495 "name": "BaseBdev3", 00:22:35.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.495 "is_configured": false, 00:22:35.495 "data_offset": 0, 00:22:35.495 "data_size": 0 00:22:35.495 }, 00:22:35.495 { 00:22:35.495 "name": "BaseBdev4", 00:22:35.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.495 "is_configured": false, 00:22:35.495 "data_offset": 0, 00:22:35.495 "data_size": 0 00:22:35.495 } 00:22:35.495 ] 00:22:35.495 }' 00:22:35.495 19:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.495 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.064 19:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:36.323 [2024-06-10 19:06:50.863088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.323 BaseBdev3 00:22:36.323 19:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:36.323 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:22:36.323 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:36.323 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:36.323 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:36.323 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:36.324 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:36.583 19:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:36.583 [ 00:22:36.583 { 00:22:36.583 "name": "BaseBdev3", 00:22:36.583 "aliases": [ 00:22:36.583 "0117e5e6-20af-4f22-bb9b-63aa8192da76" 00:22:36.583 ], 00:22:36.583 "product_name": "Malloc disk", 00:22:36.583 "block_size": 512, 00:22:36.583 "num_blocks": 65536, 00:22:36.583 "uuid": "0117e5e6-20af-4f22-bb9b-63aa8192da76", 00:22:36.583 "assigned_rate_limits": { 00:22:36.583 "rw_ios_per_sec": 0, 00:22:36.583 "rw_mbytes_per_sec": 0, 00:22:36.583 "r_mbytes_per_sec": 0, 00:22:36.583 "w_mbytes_per_sec": 0 00:22:36.583 }, 00:22:36.583 "claimed": true, 00:22:36.583 "claim_type": "exclusive_write", 
00:22:36.583 "zoned": false, 00:22:36.583 "supported_io_types": { 00:22:36.583 "read": true, 00:22:36.583 "write": true, 00:22:36.583 "unmap": true, 00:22:36.583 "write_zeroes": true, 00:22:36.583 "flush": true, 00:22:36.583 "reset": true, 00:22:36.583 "compare": false, 00:22:36.583 "compare_and_write": false, 00:22:36.583 "abort": true, 00:22:36.583 "nvme_admin": false, 00:22:36.583 "nvme_io": false 00:22:36.583 }, 00:22:36.583 "memory_domains": [ 00:22:36.583 { 00:22:36.583 "dma_device_id": "system", 00:22:36.583 "dma_device_type": 1 00:22:36.583 }, 00:22:36.583 { 00:22:36.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.583 "dma_device_type": 2 00:22:36.583 } 00:22:36.583 ], 00:22:36.583 "driver_specific": {} 00:22:36.583 } 00:22:36.583 ] 00:22:36.583 19:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:36.583 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:36.583 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:36.583 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:36.583 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.842 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.842 "name": "Existed_Raid", 00:22:36.842 "uuid": "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:36.842 "strip_size_kb": 0, 00:22:36.842 "state": "configuring", 00:22:36.842 "raid_level": "raid1", 00:22:36.842 "superblock": true, 00:22:36.842 "num_base_bdevs": 4, 00:22:36.842 "num_base_bdevs_discovered": 3, 00:22:36.842 "num_base_bdevs_operational": 4, 00:22:36.842 "base_bdevs_list": [ 00:22:36.842 { 00:22:36.842 "name": "BaseBdev1", 00:22:36.842 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:36.843 "is_configured": true, 00:22:36.843 "data_offset": 2048, 00:22:36.843 "data_size": 63488 00:22:36.843 }, 00:22:36.843 { 00:22:36.843 "name": "BaseBdev2", 00:22:36.843 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:36.843 "is_configured": true, 00:22:36.843 "data_offset": 2048, 00:22:36.843 "data_size": 63488 00:22:36.843 }, 00:22:36.843 { 00:22:36.843 "name": "BaseBdev3", 00:22:36.843 "uuid": "0117e5e6-20af-4f22-bb9b-63aa8192da76", 00:22:36.843 "is_configured": true, 00:22:36.843 "data_offset": 2048, 00:22:36.843 "data_size": 63488 00:22:36.843 }, 00:22:36.843 { 00:22:36.843 "name": "BaseBdev4", 00:22:36.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.843 "is_configured": false, 00:22:36.843 "data_offset": 0, 00:22:36.843 
"data_size": 0 00:22:36.843 } 00:22:36.843 ] 00:22:36.843 }' 00:22:36.843 19:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.843 19:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.411 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:37.670 [2024-06-10 19:06:52.346091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:37.670 [2024-06-10 19:06:52.346243] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25e5820 00:22:37.670 [2024-06-10 19:06:52.346256] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:37.670 [2024-06-10 19:06:52.346415] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25e6470 00:22:37.670 [2024-06-10 19:06:52.346532] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25e5820 00:22:37.670 [2024-06-10 19:06:52.346542] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25e5820 00:22:37.670 [2024-06-10 19:06:52.346635] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.670 BaseBdev4 00:22:37.670 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:37.670 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:22:37.670 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:37.670 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:37.670 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:37.670 19:06:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:37.670 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:37.929 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:38.189 [ 00:22:38.189 { 00:22:38.189 "name": "BaseBdev4", 00:22:38.189 "aliases": [ 00:22:38.189 "7a60950a-cf70-4a5f-ac52-d72eb6da8f99" 00:22:38.189 ], 00:22:38.189 "product_name": "Malloc disk", 00:22:38.189 "block_size": 512, 00:22:38.189 "num_blocks": 65536, 00:22:38.189 "uuid": "7a60950a-cf70-4a5f-ac52-d72eb6da8f99", 00:22:38.189 "assigned_rate_limits": { 00:22:38.189 "rw_ios_per_sec": 0, 00:22:38.189 "rw_mbytes_per_sec": 0, 00:22:38.189 "r_mbytes_per_sec": 0, 00:22:38.189 "w_mbytes_per_sec": 0 00:22:38.189 }, 00:22:38.189 "claimed": true, 00:22:38.189 "claim_type": "exclusive_write", 00:22:38.189 "zoned": false, 00:22:38.189 "supported_io_types": { 00:22:38.189 "read": true, 00:22:38.189 "write": true, 00:22:38.189 "unmap": true, 00:22:38.189 "write_zeroes": true, 00:22:38.189 "flush": true, 00:22:38.189 "reset": true, 00:22:38.189 "compare": false, 00:22:38.189 "compare_and_write": false, 00:22:38.189 "abort": true, 00:22:38.189 "nvme_admin": false, 00:22:38.189 "nvme_io": false 00:22:38.189 }, 00:22:38.189 "memory_domains": [ 00:22:38.189 { 00:22:38.189 "dma_device_id": "system", 00:22:38.189 "dma_device_type": 1 00:22:38.189 }, 00:22:38.189 { 00:22:38.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.189 "dma_device_type": 2 00:22:38.189 } 00:22:38.189 ], 00:22:38.189 "driver_specific": {} 00:22:38.189 } 00:22:38.189 ] 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 
00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.189 19:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.448 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.448 "name": "Existed_Raid", 00:22:38.448 "uuid": "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:38.448 "strip_size_kb": 0, 00:22:38.448 "state": 
"online", 00:22:38.448 "raid_level": "raid1", 00:22:38.448 "superblock": true, 00:22:38.448 "num_base_bdevs": 4, 00:22:38.448 "num_base_bdevs_discovered": 4, 00:22:38.448 "num_base_bdevs_operational": 4, 00:22:38.448 "base_bdevs_list": [ 00:22:38.448 { 00:22:38.449 "name": "BaseBdev1", 00:22:38.449 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:38.449 "is_configured": true, 00:22:38.449 "data_offset": 2048, 00:22:38.449 "data_size": 63488 00:22:38.449 }, 00:22:38.449 { 00:22:38.449 "name": "BaseBdev2", 00:22:38.449 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:38.449 "is_configured": true, 00:22:38.449 "data_offset": 2048, 00:22:38.449 "data_size": 63488 00:22:38.449 }, 00:22:38.449 { 00:22:38.449 "name": "BaseBdev3", 00:22:38.449 "uuid": "0117e5e6-20af-4f22-bb9b-63aa8192da76", 00:22:38.449 "is_configured": true, 00:22:38.449 "data_offset": 2048, 00:22:38.449 "data_size": 63488 00:22:38.449 }, 00:22:38.449 { 00:22:38.449 "name": "BaseBdev4", 00:22:38.449 "uuid": "7a60950a-cf70-4a5f-ac52-d72eb6da8f99", 00:22:38.449 "is_configured": true, 00:22:38.449 "data_offset": 2048, 00:22:38.449 "data_size": 63488 00:22:38.449 } 00:22:38.449 ] 00:22:38.449 }' 00:22:38.449 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.449 19:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:39.017 19:06:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:39.017 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:39.276 [2024-06-10 19:06:53.834302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.276 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:39.276 "name": "Existed_Raid", 00:22:39.276 "aliases": [ 00:22:39.276 "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af" 00:22:39.276 ], 00:22:39.276 "product_name": "Raid Volume", 00:22:39.276 "block_size": 512, 00:22:39.276 "num_blocks": 63488, 00:22:39.276 "uuid": "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:39.276 "assigned_rate_limits": { 00:22:39.276 "rw_ios_per_sec": 0, 00:22:39.276 "rw_mbytes_per_sec": 0, 00:22:39.276 "r_mbytes_per_sec": 0, 00:22:39.276 "w_mbytes_per_sec": 0 00:22:39.276 }, 00:22:39.276 "claimed": false, 00:22:39.276 "zoned": false, 00:22:39.276 "supported_io_types": { 00:22:39.276 "read": true, 00:22:39.276 "write": true, 00:22:39.276 "unmap": false, 00:22:39.277 "write_zeroes": true, 00:22:39.277 "flush": false, 00:22:39.277 "reset": true, 00:22:39.277 "compare": false, 00:22:39.277 "compare_and_write": false, 00:22:39.277 "abort": false, 00:22:39.277 "nvme_admin": false, 00:22:39.277 "nvme_io": false 00:22:39.277 }, 00:22:39.277 "memory_domains": [ 00:22:39.277 { 00:22:39.277 "dma_device_id": "system", 00:22:39.277 "dma_device_type": 1 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.277 "dma_device_type": 2 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": "system", 00:22:39.277 "dma_device_type": 1 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:39.277 "dma_device_type": 2 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": "system", 00:22:39.277 "dma_device_type": 1 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.277 "dma_device_type": 2 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": "system", 00:22:39.277 "dma_device_type": 1 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.277 "dma_device_type": 2 00:22:39.277 } 00:22:39.277 ], 00:22:39.277 "driver_specific": { 00:22:39.277 "raid": { 00:22:39.277 "uuid": "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:39.277 "strip_size_kb": 0, 00:22:39.277 "state": "online", 00:22:39.277 "raid_level": "raid1", 00:22:39.277 "superblock": true, 00:22:39.277 "num_base_bdevs": 4, 00:22:39.277 "num_base_bdevs_discovered": 4, 00:22:39.277 "num_base_bdevs_operational": 4, 00:22:39.277 "base_bdevs_list": [ 00:22:39.277 { 00:22:39.277 "name": "BaseBdev1", 00:22:39.277 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:39.277 "is_configured": true, 00:22:39.277 "data_offset": 2048, 00:22:39.277 "data_size": 63488 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "name": "BaseBdev2", 00:22:39.277 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:39.277 "is_configured": true, 00:22:39.277 "data_offset": 2048, 00:22:39.277 "data_size": 63488 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "name": "BaseBdev3", 00:22:39.277 "uuid": "0117e5e6-20af-4f22-bb9b-63aa8192da76", 00:22:39.277 "is_configured": true, 00:22:39.277 "data_offset": 2048, 00:22:39.277 "data_size": 63488 00:22:39.277 }, 00:22:39.277 { 00:22:39.277 "name": "BaseBdev4", 00:22:39.277 "uuid": "7a60950a-cf70-4a5f-ac52-d72eb6da8f99", 00:22:39.277 "is_configured": true, 00:22:39.277 "data_offset": 2048, 00:22:39.277 "data_size": 63488 00:22:39.277 } 00:22:39.277 ] 00:22:39.277 } 00:22:39.277 } 00:22:39.277 }' 00:22:39.277 19:06:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:39.277 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:39.277 BaseBdev2 00:22:39.277 BaseBdev3 00:22:39.277 BaseBdev4' 00:22:39.277 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:39.277 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:39.277 19:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:39.536 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:39.536 "name": "BaseBdev1", 00:22:39.536 "aliases": [ 00:22:39.536 "27351a95-5585-4a0c-8d89-86285e48a818" 00:22:39.536 ], 00:22:39.536 "product_name": "Malloc disk", 00:22:39.536 "block_size": 512, 00:22:39.536 "num_blocks": 65536, 00:22:39.536 "uuid": "27351a95-5585-4a0c-8d89-86285e48a818", 00:22:39.536 "assigned_rate_limits": { 00:22:39.536 "rw_ios_per_sec": 0, 00:22:39.536 "rw_mbytes_per_sec": 0, 00:22:39.536 "r_mbytes_per_sec": 0, 00:22:39.536 "w_mbytes_per_sec": 0 00:22:39.536 }, 00:22:39.536 "claimed": true, 00:22:39.536 "claim_type": "exclusive_write", 00:22:39.536 "zoned": false, 00:22:39.536 "supported_io_types": { 00:22:39.536 "read": true, 00:22:39.536 "write": true, 00:22:39.536 "unmap": true, 00:22:39.536 "write_zeroes": true, 00:22:39.536 "flush": true, 00:22:39.536 "reset": true, 00:22:39.536 "compare": false, 00:22:39.536 "compare_and_write": false, 00:22:39.536 "abort": true, 00:22:39.536 "nvme_admin": false, 00:22:39.536 "nvme_io": false 00:22:39.536 }, 00:22:39.536 "memory_domains": [ 00:22:39.536 { 00:22:39.536 "dma_device_id": "system", 00:22:39.536 "dma_device_type": 1 00:22:39.536 }, 00:22:39.536 { 00:22:39.536 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.536 "dma_device_type": 2 00:22:39.536 } 00:22:39.536 ], 00:22:39.537 "driver_specific": {} 00:22:39.537 }' 00:22:39.537 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.537 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.537 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:39.537 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.537 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:39.796 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:40.056 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:40.056 "name": "BaseBdev2", 
00:22:40.056 "aliases": [ 00:22:40.056 "266e0876-fdd6-4acb-aa5b-3cf950843dfa" 00:22:40.056 ], 00:22:40.056 "product_name": "Malloc disk", 00:22:40.056 "block_size": 512, 00:22:40.056 "num_blocks": 65536, 00:22:40.056 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:40.056 "assigned_rate_limits": { 00:22:40.056 "rw_ios_per_sec": 0, 00:22:40.056 "rw_mbytes_per_sec": 0, 00:22:40.056 "r_mbytes_per_sec": 0, 00:22:40.056 "w_mbytes_per_sec": 0 00:22:40.056 }, 00:22:40.056 "claimed": true, 00:22:40.056 "claim_type": "exclusive_write", 00:22:40.056 "zoned": false, 00:22:40.056 "supported_io_types": { 00:22:40.056 "read": true, 00:22:40.056 "write": true, 00:22:40.056 "unmap": true, 00:22:40.056 "write_zeroes": true, 00:22:40.056 "flush": true, 00:22:40.056 "reset": true, 00:22:40.056 "compare": false, 00:22:40.056 "compare_and_write": false, 00:22:40.056 "abort": true, 00:22:40.056 "nvme_admin": false, 00:22:40.056 "nvme_io": false 00:22:40.056 }, 00:22:40.056 "memory_domains": [ 00:22:40.056 { 00:22:40.056 "dma_device_id": "system", 00:22:40.056 "dma_device_type": 1 00:22:40.056 }, 00:22:40.056 { 00:22:40.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.056 "dma_device_type": 2 00:22:40.056 } 00:22:40.056 ], 00:22:40.056 "driver_specific": {} 00:22:40.056 }' 00:22:40.056 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.056 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.056 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:40.056 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.315 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.315 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:40.315 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# jq .md_interleave 00:22:40.315 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.315 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:40.315 19:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.315 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.315 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:40.315 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:40.315 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:40.315 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:40.575 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:40.575 "name": "BaseBdev3", 00:22:40.575 "aliases": [ 00:22:40.575 "0117e5e6-20af-4f22-bb9b-63aa8192da76" 00:22:40.575 ], 00:22:40.575 "product_name": "Malloc disk", 00:22:40.575 "block_size": 512, 00:22:40.575 "num_blocks": 65536, 00:22:40.575 "uuid": "0117e5e6-20af-4f22-bb9b-63aa8192da76", 00:22:40.575 "assigned_rate_limits": { 00:22:40.575 "rw_ios_per_sec": 0, 00:22:40.575 "rw_mbytes_per_sec": 0, 00:22:40.575 "r_mbytes_per_sec": 0, 00:22:40.575 "w_mbytes_per_sec": 0 00:22:40.575 }, 00:22:40.575 "claimed": true, 00:22:40.575 "claim_type": "exclusive_write", 00:22:40.575 "zoned": false, 00:22:40.575 "supported_io_types": { 00:22:40.575 "read": true, 00:22:40.575 "write": true, 00:22:40.575 "unmap": true, 00:22:40.575 "write_zeroes": true, 00:22:40.575 "flush": true, 00:22:40.575 "reset": true, 00:22:40.575 "compare": false, 00:22:40.575 "compare_and_write": false, 00:22:40.575 "abort": true, 
00:22:40.575 "nvme_admin": false, 00:22:40.575 "nvme_io": false 00:22:40.575 }, 00:22:40.575 "memory_domains": [ 00:22:40.575 { 00:22:40.575 "dma_device_id": "system", 00:22:40.575 "dma_device_type": 1 00:22:40.575 }, 00:22:40.575 { 00:22:40.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.575 "dma_device_type": 2 00:22:40.575 } 00:22:40.575 ], 00:22:40.575 "driver_specific": {} 00:22:40.575 }' 00:22:40.575 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.575 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:40.834 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev4 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:41.093 "name": "BaseBdev4", 00:22:41.093 "aliases": [ 00:22:41.093 "7a60950a-cf70-4a5f-ac52-d72eb6da8f99" 00:22:41.093 ], 00:22:41.093 "product_name": "Malloc disk", 00:22:41.093 "block_size": 512, 00:22:41.093 "num_blocks": 65536, 00:22:41.093 "uuid": "7a60950a-cf70-4a5f-ac52-d72eb6da8f99", 00:22:41.093 "assigned_rate_limits": { 00:22:41.093 "rw_ios_per_sec": 0, 00:22:41.093 "rw_mbytes_per_sec": 0, 00:22:41.093 "r_mbytes_per_sec": 0, 00:22:41.093 "w_mbytes_per_sec": 0 00:22:41.093 }, 00:22:41.093 "claimed": true, 00:22:41.093 "claim_type": "exclusive_write", 00:22:41.093 "zoned": false, 00:22:41.093 "supported_io_types": { 00:22:41.093 "read": true, 00:22:41.093 "write": true, 00:22:41.093 "unmap": true, 00:22:41.093 "write_zeroes": true, 00:22:41.093 "flush": true, 00:22:41.093 "reset": true, 00:22:41.093 "compare": false, 00:22:41.093 "compare_and_write": false, 00:22:41.093 "abort": true, 00:22:41.093 "nvme_admin": false, 00:22:41.093 "nvme_io": false 00:22:41.093 }, 00:22:41.093 "memory_domains": [ 00:22:41.093 { 00:22:41.093 "dma_device_id": "system", 00:22:41.093 "dma_device_type": 1 00:22:41.093 }, 00:22:41.093 { 00:22:41.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.093 "dma_device_type": 2 00:22:41.093 } 00:22:41.093 ], 00:22:41.093 "driver_specific": {} 00:22:41.093 }' 00:22:41.093 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.353 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.353 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:41.353 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.353 19:06:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.353 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:41.353 19:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.353 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.353 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:41.353 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.353 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.612 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:41.612 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:41.612 [2024-06-10 19:06:56.352742] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:41.872 19:06:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.872 "name": "Existed_Raid", 00:22:41.872 "uuid": "d6b9e078-ccf1-42c1-87a0-09a3a9ee34af", 00:22:41.872 "strip_size_kb": 0, 00:22:41.872 "state": "online", 00:22:41.872 "raid_level": "raid1", 00:22:41.872 "superblock": true, 00:22:41.872 "num_base_bdevs": 4, 00:22:41.872 "num_base_bdevs_discovered": 3, 00:22:41.872 "num_base_bdevs_operational": 3, 00:22:41.872 "base_bdevs_list": [ 00:22:41.872 { 00:22:41.872 "name": null, 00:22:41.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.872 "is_configured": false, 00:22:41.872 "data_offset": 2048, 00:22:41.872 "data_size": 63488 00:22:41.872 }, 00:22:41.872 { 00:22:41.872 "name": "BaseBdev2", 
00:22:41.872 "uuid": "266e0876-fdd6-4acb-aa5b-3cf950843dfa", 00:22:41.872 "is_configured": true, 00:22:41.872 "data_offset": 2048, 00:22:41.872 "data_size": 63488 00:22:41.872 }, 00:22:41.872 { 00:22:41.872 "name": "BaseBdev3", 00:22:41.872 "uuid": "0117e5e6-20af-4f22-bb9b-63aa8192da76", 00:22:41.872 "is_configured": true, 00:22:41.872 "data_offset": 2048, 00:22:41.872 "data_size": 63488 00:22:41.872 }, 00:22:41.872 { 00:22:41.872 "name": "BaseBdev4", 00:22:41.872 "uuid": "7a60950a-cf70-4a5f-ac52-d72eb6da8f99", 00:22:41.872 "is_configured": true, 00:22:41.872 "data_offset": 2048, 00:22:41.872 "data_size": 63488 00:22:41.872 } 00:22:41.872 ] 00:22:41.872 }' 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.872 19:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.440 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:42.440 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:42.440 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.440 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:42.700 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:42.700 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.700 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:42.959 [2024-06-10 19:06:57.585083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:42.959 19:06:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:42.959 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:42.959 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.959 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:43.219 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:43.219 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:43.219 19:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:43.478 [2024-06-10 19:06:58.052406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:43.478 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:43.478 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:43.478 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.478 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:43.738 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:43.738 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:43.738 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:43.997 [2024-06-10 19:06:58.515706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:43.997 [2024-06-10 19:06:58.515780] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.997 [2024-06-10 19:06:58.525935] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.997 [2024-06-10 19:06:58.525963] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.997 [2024-06-10 19:06:58.525973] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25e5820 name Existed_Raid, state offline 00:22:43.997 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:43.997 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:43.997 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.997 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:22:44.285 BaseBdev2 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:44.285 19:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.563 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:44.822 [ 00:22:44.822 { 00:22:44.822 "name": "BaseBdev2", 00:22:44.822 "aliases": [ 00:22:44.822 "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7" 00:22:44.822 ], 00:22:44.822 "product_name": "Malloc disk", 00:22:44.822 "block_size": 512, 00:22:44.822 "num_blocks": 65536, 00:22:44.822 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:44.822 "assigned_rate_limits": { 00:22:44.822 "rw_ios_per_sec": 0, 00:22:44.822 "rw_mbytes_per_sec": 0, 00:22:44.822 "r_mbytes_per_sec": 0, 00:22:44.822 "w_mbytes_per_sec": 0 00:22:44.822 }, 00:22:44.822 "claimed": false, 00:22:44.822 "zoned": false, 00:22:44.822 "supported_io_types": { 00:22:44.822 "read": true, 00:22:44.822 "write": true, 00:22:44.822 "unmap": true, 00:22:44.822 "write_zeroes": true, 00:22:44.822 "flush": true, 00:22:44.822 "reset": true, 00:22:44.822 "compare": false, 00:22:44.822 
"compare_and_write": false, 00:22:44.822 "abort": true, 00:22:44.822 "nvme_admin": false, 00:22:44.822 "nvme_io": false 00:22:44.822 }, 00:22:44.822 "memory_domains": [ 00:22:44.822 { 00:22:44.822 "dma_device_id": "system", 00:22:44.822 "dma_device_type": 1 00:22:44.822 }, 00:22:44.822 { 00:22:44.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.822 "dma_device_type": 2 00:22:44.822 } 00:22:44.822 ], 00:22:44.822 "driver_specific": {} 00:22:44.822 } 00:22:44.822 ] 00:22:44.822 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:44.822 19:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:44.822 19:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:44.822 19:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:45.082 BaseBdev3 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:45.082 19:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:45.342 19:06:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:45.342 [ 00:22:45.342 { 00:22:45.342 "name": "BaseBdev3", 00:22:45.342 "aliases": [ 00:22:45.342 "76ada46e-4975-46e5-80b6-5d6a35b70a66" 00:22:45.342 ], 00:22:45.342 "product_name": "Malloc disk", 00:22:45.342 "block_size": 512, 00:22:45.342 "num_blocks": 65536, 00:22:45.342 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:45.342 "assigned_rate_limits": { 00:22:45.342 "rw_ios_per_sec": 0, 00:22:45.342 "rw_mbytes_per_sec": 0, 00:22:45.342 "r_mbytes_per_sec": 0, 00:22:45.342 "w_mbytes_per_sec": 0 00:22:45.342 }, 00:22:45.342 "claimed": false, 00:22:45.342 "zoned": false, 00:22:45.342 "supported_io_types": { 00:22:45.342 "read": true, 00:22:45.342 "write": true, 00:22:45.342 "unmap": true, 00:22:45.342 "write_zeroes": true, 00:22:45.342 "flush": true, 00:22:45.342 "reset": true, 00:22:45.342 "compare": false, 00:22:45.342 "compare_and_write": false, 00:22:45.342 "abort": true, 00:22:45.342 "nvme_admin": false, 00:22:45.342 "nvme_io": false 00:22:45.342 }, 00:22:45.342 "memory_domains": [ 00:22:45.342 { 00:22:45.342 "dma_device_id": "system", 00:22:45.342 "dma_device_type": 1 00:22:45.342 }, 00:22:45.342 { 00:22:45.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.342 "dma_device_type": 2 00:22:45.342 } 00:22:45.342 ], 00:22:45.342 "driver_specific": {} 00:22:45.342 } 00:22:45.342 ] 00:22:45.342 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:45.342 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:45.342 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:45.342 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev4 00:22:45.602 BaseBdev4 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:45.602 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:45.860 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:46.119 [ 00:22:46.119 { 00:22:46.119 "name": "BaseBdev4", 00:22:46.119 "aliases": [ 00:22:46.119 "ade321bb-4d90-4093-b79c-10808791d115" 00:22:46.119 ], 00:22:46.119 "product_name": "Malloc disk", 00:22:46.119 "block_size": 512, 00:22:46.119 "num_blocks": 65536, 00:22:46.119 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:46.119 "assigned_rate_limits": { 00:22:46.119 "rw_ios_per_sec": 0, 00:22:46.119 "rw_mbytes_per_sec": 0, 00:22:46.119 "r_mbytes_per_sec": 0, 00:22:46.119 "w_mbytes_per_sec": 0 00:22:46.119 }, 00:22:46.119 "claimed": false, 00:22:46.119 "zoned": false, 00:22:46.119 "supported_io_types": { 00:22:46.119 "read": true, 00:22:46.119 "write": true, 00:22:46.119 "unmap": true, 00:22:46.119 "write_zeroes": true, 00:22:46.119 "flush": true, 00:22:46.119 "reset": true, 00:22:46.119 "compare": false, 
00:22:46.119 "compare_and_write": false, 00:22:46.119 "abort": true, 00:22:46.119 "nvme_admin": false, 00:22:46.119 "nvme_io": false 00:22:46.119 }, 00:22:46.119 "memory_domains": [ 00:22:46.119 { 00:22:46.119 "dma_device_id": "system", 00:22:46.119 "dma_device_type": 1 00:22:46.119 }, 00:22:46.119 { 00:22:46.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.119 "dma_device_type": 2 00:22:46.119 } 00:22:46.119 ], 00:22:46.119 "driver_specific": {} 00:22:46.119 } 00:22:46.119 ] 00:22:46.119 19:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:46.119 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:46.119 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:46.119 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:46.379 [2024-06-10 19:07:00.953004] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.379 [2024-06-10 19:07:00.953039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.379 [2024-06-10 19:07:00.953057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:46.379 [2024-06-10 19:07:00.954296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:46.379 [2024-06-10 19:07:00.954332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.379 19:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.638 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.638 "name": "Existed_Raid", 00:22:46.638 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:46.638 "strip_size_kb": 0, 00:22:46.638 "state": "configuring", 00:22:46.638 "raid_level": "raid1", 00:22:46.638 "superblock": true, 00:22:46.638 "num_base_bdevs": 4, 00:22:46.638 "num_base_bdevs_discovered": 3, 00:22:46.638 "num_base_bdevs_operational": 4, 00:22:46.638 "base_bdevs_list": [ 00:22:46.638 { 00:22:46.638 "name": "BaseBdev1", 00:22:46.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.638 "is_configured": false, 00:22:46.638 "data_offset": 0, 00:22:46.638 "data_size": 0 00:22:46.638 }, 00:22:46.638 { 
00:22:46.638 "name": "BaseBdev2", 00:22:46.638 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:46.638 "is_configured": true, 00:22:46.638 "data_offset": 2048, 00:22:46.638 "data_size": 63488 00:22:46.638 }, 00:22:46.638 { 00:22:46.638 "name": "BaseBdev3", 00:22:46.638 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:46.638 "is_configured": true, 00:22:46.638 "data_offset": 2048, 00:22:46.638 "data_size": 63488 00:22:46.638 }, 00:22:46.638 { 00:22:46.638 "name": "BaseBdev4", 00:22:46.638 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:46.638 "is_configured": true, 00:22:46.638 "data_offset": 2048, 00:22:46.638 "data_size": 63488 00:22:46.638 } 00:22:46.638 ] 00:22:46.638 }' 00:22:46.638 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.639 19:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.207 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:47.207 [2024-06-10 19:07:01.951614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.466 19:07:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.466 19:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.466 19:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.466 "name": "Existed_Raid", 00:22:47.466 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:47.466 "strip_size_kb": 0, 00:22:47.466 "state": "configuring", 00:22:47.466 "raid_level": "raid1", 00:22:47.466 "superblock": true, 00:22:47.466 "num_base_bdevs": 4, 00:22:47.466 "num_base_bdevs_discovered": 2, 00:22:47.466 "num_base_bdevs_operational": 4, 00:22:47.466 "base_bdevs_list": [ 00:22:47.466 { 00:22:47.466 "name": "BaseBdev1", 00:22:47.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.466 "is_configured": false, 00:22:47.466 "data_offset": 0, 00:22:47.466 "data_size": 0 00:22:47.466 }, 00:22:47.466 { 00:22:47.466 "name": null, 00:22:47.466 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:47.466 "is_configured": false, 00:22:47.466 "data_offset": 2048, 00:22:47.466 "data_size": 63488 00:22:47.466 }, 00:22:47.466 { 00:22:47.466 "name": "BaseBdev3", 00:22:47.466 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:47.466 "is_configured": true, 00:22:47.466 "data_offset": 2048, 00:22:47.466 "data_size": 63488 00:22:47.466 }, 00:22:47.466 { 00:22:47.466 "name": "BaseBdev4", 00:22:47.466 
"uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:47.466 "is_configured": true, 00:22:47.466 "data_offset": 2048, 00:22:47.466 "data_size": 63488 00:22:47.466 } 00:22:47.466 ] 00:22:47.466 }' 00:22:47.466 19:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.466 19:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.033 19:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.033 19:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:48.291 19:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:48.291 19:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:48.550 [2024-06-10 19:07:03.214225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.550 BaseBdev1 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:48.550 19:07:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:48.809 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:49.069 [ 00:22:49.069 { 00:22:49.069 "name": "BaseBdev1", 00:22:49.069 "aliases": [ 00:22:49.069 "8954c821-43a9-40b6-bf26-c7f718b24825" 00:22:49.069 ], 00:22:49.069 "product_name": "Malloc disk", 00:22:49.069 "block_size": 512, 00:22:49.069 "num_blocks": 65536, 00:22:49.069 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:49.069 "assigned_rate_limits": { 00:22:49.069 "rw_ios_per_sec": 0, 00:22:49.069 "rw_mbytes_per_sec": 0, 00:22:49.069 "r_mbytes_per_sec": 0, 00:22:49.069 "w_mbytes_per_sec": 0 00:22:49.069 }, 00:22:49.069 "claimed": true, 00:22:49.069 "claim_type": "exclusive_write", 00:22:49.069 "zoned": false, 00:22:49.069 "supported_io_types": { 00:22:49.069 "read": true, 00:22:49.069 "write": true, 00:22:49.069 "unmap": true, 00:22:49.069 "write_zeroes": true, 00:22:49.069 "flush": true, 00:22:49.069 "reset": true, 00:22:49.069 "compare": false, 00:22:49.069 "compare_and_write": false, 00:22:49.069 "abort": true, 00:22:49.069 "nvme_admin": false, 00:22:49.069 "nvme_io": false 00:22:49.069 }, 00:22:49.069 "memory_domains": [ 00:22:49.069 { 00:22:49.069 "dma_device_id": "system", 00:22:49.069 "dma_device_type": 1 00:22:49.069 }, 00:22:49.069 { 00:22:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.069 "dma_device_type": 2 00:22:49.069 } 00:22:49.069 ], 00:22:49.069 "driver_specific": {} 00:22:49.069 } 00:22:49.069 ] 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:49.069 
19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.069 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.329 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.329 "name": "Existed_Raid", 00:22:49.329 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:49.329 "strip_size_kb": 0, 00:22:49.329 "state": "configuring", 00:22:49.329 "raid_level": "raid1", 00:22:49.329 "superblock": true, 00:22:49.329 "num_base_bdevs": 4, 00:22:49.329 "num_base_bdevs_discovered": 3, 00:22:49.329 "num_base_bdevs_operational": 4, 00:22:49.329 "base_bdevs_list": [ 00:22:49.329 { 00:22:49.329 "name": "BaseBdev1", 00:22:49.329 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:49.329 
"is_configured": true, 00:22:49.329 "data_offset": 2048, 00:22:49.329 "data_size": 63488 00:22:49.329 }, 00:22:49.329 { 00:22:49.329 "name": null, 00:22:49.329 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:49.329 "is_configured": false, 00:22:49.329 "data_offset": 2048, 00:22:49.329 "data_size": 63488 00:22:49.329 }, 00:22:49.329 { 00:22:49.329 "name": "BaseBdev3", 00:22:49.329 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:49.329 "is_configured": true, 00:22:49.329 "data_offset": 2048, 00:22:49.329 "data_size": 63488 00:22:49.329 }, 00:22:49.329 { 00:22:49.329 "name": "BaseBdev4", 00:22:49.329 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:49.329 "is_configured": true, 00:22:49.329 "data_offset": 2048, 00:22:49.329 "data_size": 63488 00:22:49.329 } 00:22:49.329 ] 00:22:49.329 }' 00:22:49.329 19:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.329 19:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.897 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.897 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:50.157 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:50.157 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:50.417 [2024-06-10 19:07:04.918748] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:50.417 19:07:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.417 19:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.417 19:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.417 "name": "Existed_Raid", 00:22:50.417 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:50.417 "strip_size_kb": 0, 00:22:50.417 "state": "configuring", 00:22:50.417 "raid_level": "raid1", 00:22:50.417 "superblock": true, 00:22:50.417 "num_base_bdevs": 4, 00:22:50.417 "num_base_bdevs_discovered": 2, 00:22:50.417 "num_base_bdevs_operational": 4, 00:22:50.417 "base_bdevs_list": [ 00:22:50.417 { 00:22:50.417 "name": "BaseBdev1", 00:22:50.417 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:50.417 "is_configured": 
true, 00:22:50.417 "data_offset": 2048, 00:22:50.417 "data_size": 63488 00:22:50.417 }, 00:22:50.417 { 00:22:50.417 "name": null, 00:22:50.417 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:50.417 "is_configured": false, 00:22:50.417 "data_offset": 2048, 00:22:50.417 "data_size": 63488 00:22:50.417 }, 00:22:50.417 { 00:22:50.417 "name": null, 00:22:50.417 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:50.417 "is_configured": false, 00:22:50.417 "data_offset": 2048, 00:22:50.417 "data_size": 63488 00:22:50.417 }, 00:22:50.417 { 00:22:50.417 "name": "BaseBdev4", 00:22:50.417 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:50.417 "is_configured": true, 00:22:50.417 "data_offset": 2048, 00:22:50.417 "data_size": 63488 00:22:50.417 } 00:22:50.417 ] 00:22:50.418 }' 00:22:50.418 19:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.418 19:07:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.986 19:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:50.986 19:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.246 19:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:51.246 19:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:51.505 [2024-06-10 19:07:06.113914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:51.505 19:07:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.505 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.765 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.765 "name": "Existed_Raid", 00:22:51.765 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:51.765 "strip_size_kb": 0, 00:22:51.765 "state": "configuring", 00:22:51.765 "raid_level": "raid1", 00:22:51.765 "superblock": true, 00:22:51.765 "num_base_bdevs": 4, 00:22:51.765 "num_base_bdevs_discovered": 3, 00:22:51.765 "num_base_bdevs_operational": 4, 00:22:51.765 "base_bdevs_list": [ 00:22:51.765 { 00:22:51.765 "name": "BaseBdev1", 00:22:51.765 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:51.765 "is_configured": 
true, 00:22:51.765 "data_offset": 2048, 00:22:51.765 "data_size": 63488 00:22:51.765 }, 00:22:51.765 { 00:22:51.765 "name": null, 00:22:51.765 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:51.765 "is_configured": false, 00:22:51.765 "data_offset": 2048, 00:22:51.765 "data_size": 63488 00:22:51.765 }, 00:22:51.765 { 00:22:51.765 "name": "BaseBdev3", 00:22:51.765 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:51.765 "is_configured": true, 00:22:51.765 "data_offset": 2048, 00:22:51.765 "data_size": 63488 00:22:51.765 }, 00:22:51.765 { 00:22:51.765 "name": "BaseBdev4", 00:22:51.765 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:51.765 "is_configured": true, 00:22:51.765 "data_offset": 2048, 00:22:51.765 "data_size": 63488 00:22:51.765 } 00:22:51.765 ] 00:22:51.765 }' 00:22:51.765 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.765 19:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.335 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.335 19:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:52.595 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:52.595 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:52.856 [2024-06-10 19:07:07.353199] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.856 "name": "Existed_Raid", 00:22:52.856 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:52.856 "strip_size_kb": 0, 00:22:52.856 "state": "configuring", 00:22:52.856 "raid_level": "raid1", 00:22:52.856 "superblock": true, 00:22:52.856 "num_base_bdevs": 4, 00:22:52.856 "num_base_bdevs_discovered": 2, 00:22:52.856 "num_base_bdevs_operational": 4, 00:22:52.856 "base_bdevs_list": [ 00:22:52.856 { 00:22:52.856 "name": null, 00:22:52.856 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:52.856 "is_configured": false, 00:22:52.856 "data_offset": 2048, 
00:22:52.856 "data_size": 63488 00:22:52.856 }, 00:22:52.856 { 00:22:52.856 "name": null, 00:22:52.856 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:52.856 "is_configured": false, 00:22:52.856 "data_offset": 2048, 00:22:52.856 "data_size": 63488 00:22:52.856 }, 00:22:52.856 { 00:22:52.856 "name": "BaseBdev3", 00:22:52.856 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:52.856 "is_configured": true, 00:22:52.856 "data_offset": 2048, 00:22:52.856 "data_size": 63488 00:22:52.856 }, 00:22:52.856 { 00:22:52.856 "name": "BaseBdev4", 00:22:52.856 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:52.856 "is_configured": true, 00:22:52.856 "data_offset": 2048, 00:22:52.856 "data_size": 63488 00:22:52.856 } 00:22:52.856 ] 00:22:52.856 }' 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.856 19:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.795 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.795 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:53.795 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:53.795 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:54.055 [2024-06-10 19:07:08.626529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.055 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.315 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.315 "name": "Existed_Raid", 00:22:54.315 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:54.315 "strip_size_kb": 0, 00:22:54.315 "state": "configuring", 00:22:54.315 "raid_level": "raid1", 00:22:54.315 "superblock": true, 00:22:54.315 "num_base_bdevs": 4, 00:22:54.315 "num_base_bdevs_discovered": 3, 00:22:54.315 "num_base_bdevs_operational": 4, 00:22:54.315 "base_bdevs_list": [ 00:22:54.315 { 00:22:54.315 "name": null, 00:22:54.315 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:54.315 "is_configured": false, 00:22:54.315 "data_offset": 2048, 
00:22:54.315 "data_size": 63488 00:22:54.315 }, 00:22:54.315 { 00:22:54.315 "name": "BaseBdev2", 00:22:54.315 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:54.315 "is_configured": true, 00:22:54.315 "data_offset": 2048, 00:22:54.315 "data_size": 63488 00:22:54.315 }, 00:22:54.315 { 00:22:54.315 "name": "BaseBdev3", 00:22:54.315 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:54.315 "is_configured": true, 00:22:54.315 "data_offset": 2048, 00:22:54.315 "data_size": 63488 00:22:54.315 }, 00:22:54.315 { 00:22:54.315 "name": "BaseBdev4", 00:22:54.315 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:54.315 "is_configured": true, 00:22:54.315 "data_offset": 2048, 00:22:54.315 "data_size": 63488 00:22:54.315 } 00:22:54.315 ] 00:22:54.315 }' 00:22:54.315 19:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.315 19:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.885 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.885 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:55.144 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:55.145 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.145 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:55.145 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 
8954c821-43a9-40b6-bf26-c7f718b24825 00:22:55.405 [2024-06-10 19:07:10.109517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:55.406 [2024-06-10 19:07:10.109668] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25ea440 00:22:55.406 [2024-06-10 19:07:10.109681] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:55.406 [2024-06-10 19:07:10.110068] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24f9f80 00:22:55.406 [2024-06-10 19:07:10.110257] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25ea440 00:22:55.406 [2024-06-10 19:07:10.110268] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25ea440 00:22:55.406 NewBaseBdev 00:22:55.406 [2024-06-10 19:07:10.110355] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:22:55.406 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:55.663 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:55.921 [ 00:22:55.921 { 00:22:55.921 "name": "NewBaseBdev", 00:22:55.921 "aliases": [ 00:22:55.921 "8954c821-43a9-40b6-bf26-c7f718b24825" 00:22:55.921 ], 00:22:55.921 "product_name": "Malloc disk", 00:22:55.921 "block_size": 512, 00:22:55.921 "num_blocks": 65536, 00:22:55.921 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:55.921 "assigned_rate_limits": { 00:22:55.921 "rw_ios_per_sec": 0, 00:22:55.921 "rw_mbytes_per_sec": 0, 00:22:55.921 "r_mbytes_per_sec": 0, 00:22:55.921 "w_mbytes_per_sec": 0 00:22:55.921 }, 00:22:55.921 "claimed": true, 00:22:55.921 "claim_type": "exclusive_write", 00:22:55.921 "zoned": false, 00:22:55.921 "supported_io_types": { 00:22:55.921 "read": true, 00:22:55.921 "write": true, 00:22:55.921 "unmap": true, 00:22:55.921 "write_zeroes": true, 00:22:55.921 "flush": true, 00:22:55.921 "reset": true, 00:22:55.921 "compare": false, 00:22:55.921 "compare_and_write": false, 00:22:55.921 "abort": true, 00:22:55.921 "nvme_admin": false, 00:22:55.921 "nvme_io": false 00:22:55.921 }, 00:22:55.921 "memory_domains": [ 00:22:55.921 { 00:22:55.921 "dma_device_id": "system", 00:22:55.921 "dma_device_type": 1 00:22:55.921 }, 00:22:55.921 { 00:22:55.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.921 "dma_device_type": 2 00:22:55.921 } 00:22:55.921 ], 00:22:55.921 "driver_specific": {} 00:22:55.921 } 00:22:55.921 ] 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.921 
19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.921 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.922 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.922 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.180 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.180 "name": "Existed_Raid", 00:22:56.180 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:56.180 "strip_size_kb": 0, 00:22:56.180 "state": "online", 00:22:56.180 "raid_level": "raid1", 00:22:56.180 "superblock": true, 00:22:56.180 "num_base_bdevs": 4, 00:22:56.180 "num_base_bdevs_discovered": 4, 00:22:56.180 "num_base_bdevs_operational": 4, 00:22:56.180 "base_bdevs_list": [ 00:22:56.180 { 00:22:56.180 "name": "NewBaseBdev", 00:22:56.180 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:56.180 "is_configured": true, 00:22:56.180 "data_offset": 2048, 00:22:56.180 "data_size": 63488 00:22:56.180 }, 00:22:56.180 { 00:22:56.180 "name": "BaseBdev2", 00:22:56.180 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:56.180 "is_configured": true, 
00:22:56.180 "data_offset": 2048, 00:22:56.180 "data_size": 63488 00:22:56.180 }, 00:22:56.180 { 00:22:56.180 "name": "BaseBdev3", 00:22:56.180 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:56.180 "is_configured": true, 00:22:56.180 "data_offset": 2048, 00:22:56.180 "data_size": 63488 00:22:56.180 }, 00:22:56.180 { 00:22:56.180 "name": "BaseBdev4", 00:22:56.180 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:56.180 "is_configured": true, 00:22:56.180 "data_offset": 2048, 00:22:56.180 "data_size": 63488 00:22:56.180 } 00:22:56.180 ] 00:22:56.180 }' 00:22:56.180 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.180 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:56.747 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:57.007 [2024-06-10 19:07:11.585683] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.007 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:22:57.007 "name": "Existed_Raid", 00:22:57.007 "aliases": [ 00:22:57.007 "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d" 00:22:57.007 ], 00:22:57.007 "product_name": "Raid Volume", 00:22:57.007 "block_size": 512, 00:22:57.007 "num_blocks": 63488, 00:22:57.007 "uuid": "ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:57.007 "assigned_rate_limits": { 00:22:57.007 "rw_ios_per_sec": 0, 00:22:57.007 "rw_mbytes_per_sec": 0, 00:22:57.007 "r_mbytes_per_sec": 0, 00:22:57.007 "w_mbytes_per_sec": 0 00:22:57.007 }, 00:22:57.007 "claimed": false, 00:22:57.007 "zoned": false, 00:22:57.007 "supported_io_types": { 00:22:57.007 "read": true, 00:22:57.007 "write": true, 00:22:57.007 "unmap": false, 00:22:57.007 "write_zeroes": true, 00:22:57.007 "flush": false, 00:22:57.007 "reset": true, 00:22:57.007 "compare": false, 00:22:57.007 "compare_and_write": false, 00:22:57.007 "abort": false, 00:22:57.007 "nvme_admin": false, 00:22:57.007 "nvme_io": false 00:22:57.007 }, 00:22:57.007 "memory_domains": [ 00:22:57.007 { 00:22:57.007 "dma_device_id": "system", 00:22:57.007 "dma_device_type": 1 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.007 "dma_device_type": 2 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "system", 00:22:57.007 "dma_device_type": 1 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.007 "dma_device_type": 2 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "system", 00:22:57.007 "dma_device_type": 1 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.007 "dma_device_type": 2 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "system", 00:22:57.007 "dma_device_type": 1 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.007 "dma_device_type": 2 00:22:57.007 } 00:22:57.007 ], 00:22:57.007 "driver_specific": { 00:22:57.007 "raid": { 00:22:57.007 "uuid": 
"ce7d1faf-a50a-452c-85a4-2aa0f9cddc3d", 00:22:57.007 "strip_size_kb": 0, 00:22:57.007 "state": "online", 00:22:57.007 "raid_level": "raid1", 00:22:57.007 "superblock": true, 00:22:57.007 "num_base_bdevs": 4, 00:22:57.007 "num_base_bdevs_discovered": 4, 00:22:57.007 "num_base_bdevs_operational": 4, 00:22:57.007 "base_bdevs_list": [ 00:22:57.007 { 00:22:57.007 "name": "NewBaseBdev", 00:22:57.007 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:57.007 "is_configured": true, 00:22:57.007 "data_offset": 2048, 00:22:57.007 "data_size": 63488 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "name": "BaseBdev2", 00:22:57.007 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:57.007 "is_configured": true, 00:22:57.007 "data_offset": 2048, 00:22:57.007 "data_size": 63488 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "name": "BaseBdev3", 00:22:57.007 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:57.007 "is_configured": true, 00:22:57.007 "data_offset": 2048, 00:22:57.007 "data_size": 63488 00:22:57.007 }, 00:22:57.007 { 00:22:57.007 "name": "BaseBdev4", 00:22:57.007 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:57.007 "is_configured": true, 00:22:57.007 "data_offset": 2048, 00:22:57.007 "data_size": 63488 00:22:57.007 } 00:22:57.007 ] 00:22:57.007 } 00:22:57.007 } 00:22:57.007 }' 00:22:57.007 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.007 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:57.007 BaseBdev2 00:22:57.007 BaseBdev3 00:22:57.007 BaseBdev4' 00:22:57.008 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.008 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:22:57.008 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.267 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.267 "name": "NewBaseBdev", 00:22:57.267 "aliases": [ 00:22:57.267 "8954c821-43a9-40b6-bf26-c7f718b24825" 00:22:57.267 ], 00:22:57.267 "product_name": "Malloc disk", 00:22:57.267 "block_size": 512, 00:22:57.267 "num_blocks": 65536, 00:22:57.267 "uuid": "8954c821-43a9-40b6-bf26-c7f718b24825", 00:22:57.267 "assigned_rate_limits": { 00:22:57.267 "rw_ios_per_sec": 0, 00:22:57.267 "rw_mbytes_per_sec": 0, 00:22:57.267 "r_mbytes_per_sec": 0, 00:22:57.267 "w_mbytes_per_sec": 0 00:22:57.267 }, 00:22:57.267 "claimed": true, 00:22:57.268 "claim_type": "exclusive_write", 00:22:57.268 "zoned": false, 00:22:57.268 "supported_io_types": { 00:22:57.268 "read": true, 00:22:57.268 "write": true, 00:22:57.268 "unmap": true, 00:22:57.268 "write_zeroes": true, 00:22:57.268 "flush": true, 00:22:57.268 "reset": true, 00:22:57.268 "compare": false, 00:22:57.268 "compare_and_write": false, 00:22:57.268 "abort": true, 00:22:57.268 "nvme_admin": false, 00:22:57.268 "nvme_io": false 00:22:57.268 }, 00:22:57.268 "memory_domains": [ 00:22:57.268 { 00:22:57.268 "dma_device_id": "system", 00:22:57.268 "dma_device_type": 1 00:22:57.268 }, 00:22:57.268 { 00:22:57.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.268 "dma_device_type": 2 00:22:57.268 } 00:22:57.268 ], 00:22:57.268 "driver_specific": {} 00:22:57.268 }' 00:22:57.268 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.268 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.268 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.268 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.268 19:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:57.538 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.805 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.805 "name": "BaseBdev2", 00:22:57.805 "aliases": [ 00:22:57.805 "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7" 00:22:57.805 ], 00:22:57.805 "product_name": "Malloc disk", 00:22:57.805 "block_size": 512, 00:22:57.805 "num_blocks": 65536, 00:22:57.805 "uuid": "3a356bd2-3aaa-4e82-a83a-36d0611e3cb7", 00:22:57.805 "assigned_rate_limits": { 00:22:57.805 "rw_ios_per_sec": 0, 00:22:57.805 "rw_mbytes_per_sec": 0, 00:22:57.805 "r_mbytes_per_sec": 0, 00:22:57.805 "w_mbytes_per_sec": 0 00:22:57.805 }, 00:22:57.805 "claimed": true, 00:22:57.805 "claim_type": "exclusive_write", 00:22:57.805 "zoned": false, 00:22:57.805 "supported_io_types": 
{ 00:22:57.805 "read": true, 00:22:57.805 "write": true, 00:22:57.805 "unmap": true, 00:22:57.805 "write_zeroes": true, 00:22:57.805 "flush": true, 00:22:57.805 "reset": true, 00:22:57.805 "compare": false, 00:22:57.805 "compare_and_write": false, 00:22:57.805 "abort": true, 00:22:57.805 "nvme_admin": false, 00:22:57.805 "nvme_io": false 00:22:57.805 }, 00:22:57.805 "memory_domains": [ 00:22:57.805 { 00:22:57.805 "dma_device_id": "system", 00:22:57.805 "dma_device_type": 1 00:22:57.805 }, 00:22:57.805 { 00:22:57.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.805 "dma_device_type": 2 00:22:57.805 } 00:22:57.805 ], 00:22:57.805 "driver_specific": {} 00:22:57.805 }' 00:22:57.805 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.805 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.805 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.805 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.063 19:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.063 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:58.321 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.321 "name": "BaseBdev3", 00:22:58.321 "aliases": [ 00:22:58.321 "76ada46e-4975-46e5-80b6-5d6a35b70a66" 00:22:58.321 ], 00:22:58.321 "product_name": "Malloc disk", 00:22:58.321 "block_size": 512, 00:22:58.321 "num_blocks": 65536, 00:22:58.321 "uuid": "76ada46e-4975-46e5-80b6-5d6a35b70a66", 00:22:58.321 "assigned_rate_limits": { 00:22:58.321 "rw_ios_per_sec": 0, 00:22:58.321 "rw_mbytes_per_sec": 0, 00:22:58.321 "r_mbytes_per_sec": 0, 00:22:58.321 "w_mbytes_per_sec": 0 00:22:58.321 }, 00:22:58.321 "claimed": true, 00:22:58.321 "claim_type": "exclusive_write", 00:22:58.321 "zoned": false, 00:22:58.321 "supported_io_types": { 00:22:58.321 "read": true, 00:22:58.321 "write": true, 00:22:58.321 "unmap": true, 00:22:58.321 "write_zeroes": true, 00:22:58.321 "flush": true, 00:22:58.321 "reset": true, 00:22:58.321 "compare": false, 00:22:58.321 "compare_and_write": false, 00:22:58.321 "abort": true, 00:22:58.321 "nvme_admin": false, 00:22:58.321 "nvme_io": false 00:22:58.321 }, 00:22:58.321 "memory_domains": [ 00:22:58.321 { 00:22:58.321 "dma_device_id": "system", 00:22:58.321 "dma_device_type": 1 00:22:58.321 }, 00:22:58.321 { 00:22:58.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.321 "dma_device_type": 2 00:22:58.321 } 00:22:58.321 ], 00:22:58.321 "driver_specific": {} 00:22:58.321 }' 00:22:58.321 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.321 19:07:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.580 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.838 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.838 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.838 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:58.838 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.838 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.838 "name": "BaseBdev4", 00:22:58.838 "aliases": [ 00:22:58.838 "ade321bb-4d90-4093-b79c-10808791d115" 00:22:58.838 ], 00:22:58.838 "product_name": "Malloc disk", 00:22:58.838 "block_size": 512, 00:22:58.838 "num_blocks": 65536, 00:22:58.838 "uuid": "ade321bb-4d90-4093-b79c-10808791d115", 00:22:58.838 "assigned_rate_limits": { 00:22:58.838 
"rw_ios_per_sec": 0, 00:22:58.838 "rw_mbytes_per_sec": 0, 00:22:58.838 "r_mbytes_per_sec": 0, 00:22:58.838 "w_mbytes_per_sec": 0 00:22:58.838 }, 00:22:58.838 "claimed": true, 00:22:58.838 "claim_type": "exclusive_write", 00:22:58.838 "zoned": false, 00:22:58.838 "supported_io_types": { 00:22:58.838 "read": true, 00:22:58.838 "write": true, 00:22:58.838 "unmap": true, 00:22:58.838 "write_zeroes": true, 00:22:58.838 "flush": true, 00:22:58.838 "reset": true, 00:22:58.838 "compare": false, 00:22:58.838 "compare_and_write": false, 00:22:58.838 "abort": true, 00:22:58.838 "nvme_admin": false, 00:22:58.838 "nvme_io": false 00:22:58.838 }, 00:22:58.838 "memory_domains": [ 00:22:58.838 { 00:22:58.838 "dma_device_id": "system", 00:22:58.838 "dma_device_type": 1 00:22:58.838 }, 00:22:58.838 { 00:22:58.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.838 "dma_device_type": 2 00:22:58.838 } 00:22:58.838 ], 00:22:58.838 "driver_specific": {} 00:22:58.838 }' 00:22:58.838 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:59.097 19:07:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.356 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.356 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:59.356 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:59.356 [2024-06-10 19:07:14.112106] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:59.356 [2024-06-10 19:07:14.112129] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.356 [2024-06-10 19:07:14.112174] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.356 [2024-06-10 19:07:14.112420] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.356 [2024-06-10 19:07:14.112431] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25ea440 name Existed_Raid, state offline 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1732069 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1732069 ']' 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 1732069 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1732069 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:59.616 19:07:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1732069' 00:22:59.616 killing process with pid 1732069 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 1732069 00:22:59.616 [2024-06-10 19:07:14.187286] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:59.616 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 1732069 00:22:59.616 [2024-06-10 19:07:14.218838] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:59.875 19:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:59.875 00:22:59.875 real 0m30.333s 00:22:59.875 user 0m55.621s 00:22:59.875 sys 0m5.536s 00:22:59.875 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:59.875 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.875 ************************************ 00:22:59.875 END TEST raid_state_function_test_sb 00:22:59.875 ************************************ 00:22:59.875 19:07:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:59.875 19:07:14 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:59.875 19:07:14 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:59.875 19:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:59.875 ************************************ 00:22:59.875 START TEST raid_superblock_test 00:22:59.875 ************************************ 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 4 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # 
local raid_level=raid1 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1737818 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1737818 /var/tmp/spdk-raid.sock 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@830 -- # '[' -z 1737818 ']' 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:59.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:59.875 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:59.876 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 [2024-06-10 19:07:14.557694] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:22:59.876 [2024-06-10 19:07:14.557749] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737818 ] 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.0 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.1 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.2 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.3 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.4 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested 
device 0000:b6:01.5 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.6 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:01.7 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:02.0 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:02.1 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:02.2 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:02.3 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:02.4 cannot be used 00:22:59.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.876 EAL: Requested device 0000:b6:02.5 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b6:02.6 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b6:02.7 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.0 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.1 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.2 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.3 
cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.4 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.5 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.134 EAL: Requested device 0000:b8:01.6 cannot be used 00:23:00.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:01.7 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.0 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.1 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.2 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.3 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.4 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.5 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.6 cannot be used 00:23:00.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:00.135 EAL: Requested device 0000:b8:02.7 cannot be used 00:23:00.135 [2024-06-10 19:07:14.690573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.135 [2024-06-10 19:07:14.780216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.135 [2024-06-10 19:07:14.841394] bdev_raid.c:1416:raid_bdev_get_ctx_size: 
*DEBUG*: raid_bdev_get_ctx_size 00:23:00.135 [2024-06-10 19:07:14.841430] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:00.703 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:00.961 malloc1 00:23:00.961 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:01.220 [2024-06-10 19:07:15.890326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:01.220 [2024-06-10 19:07:15.890371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.220 [2024-06-10 
19:07:15.890390] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x887b70 00:23:01.220 [2024-06-10 19:07:15.890402] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.220 [2024-06-10 19:07:15.891942] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.220 [2024-06-10 19:07:15.891970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:01.220 pt1 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:01.220 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:01.479 malloc2 00:23:01.479 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:01.738 [2024-06-10 19:07:16.344092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:01.738 [2024-06-10 
19:07:16.344135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.738 [2024-06-10 19:07:16.344151] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x888f70 00:23:01.738 [2024-06-10 19:07:16.344162] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.738 [2024-06-10 19:07:16.345585] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.738 [2024-06-10 19:07:16.345611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:01.738 pt2 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:01.738 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:01.997 malloc3 00:23:01.997 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:02.257 [2024-06-10 
19:07:16.801553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:02.257 [2024-06-10 19:07:16.801603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.257 [2024-06-10 19:07:16.801619] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa1f940 00:23:02.257 [2024-06-10 19:07:16.801631] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.257 [2024-06-10 19:07:16.802985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.257 [2024-06-10 19:07:16.803011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:02.257 pt3 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:02.257 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:02.516 malloc4 00:23:02.516 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:02.516 [2024-06-10 19:07:17.259081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:02.516 [2024-06-10 19:07:17.259126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.516 [2024-06-10 19:07:17.259143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x87f900 00:23:02.516 [2024-06-10 19:07:17.259155] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.516 [2024-06-10 19:07:17.260505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.516 [2024-06-10 19:07:17.260532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:02.516 pt4 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:02.776 [2024-06-10 19:07:17.471658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:02.776 [2024-06-10 19:07:17.472803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:02.776 [2024-06-10 19:07:17.472852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:02.776 [2024-06-10 19:07:17.472891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:02.776 [2024-06-10 19:07:17.473049] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x881800 00:23:02.776 [2024-06-10 19:07:17.473063] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
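At this point the test has created four passthru bdevs (pt1..pt4) on top of malloc bdevs and assembled them into raid_bdev1 with `bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s`; the script then verifies the resulting state by dumping `bdev_raid_get_bdevs all` through jq. The following is a minimal Python sketch of that verification step, not part of bdev_raid.sh itself: the field names mirror the raid bdev JSON printed later in this log, but the trimmed sample record and the helper name `configured_base_bdevs` are illustrative assumptions.

```python
import json

# Trimmed-down raid bdev record with the same fields the test inspects.
# Values mirror the raid_bdev1 JSON dumped by bdev_raid_get_bdevs in this
# log; this sample is illustrative, not captured test output.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt3", "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt4", "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def configured_base_bdevs(info):
    # Python equivalent of the jq filter used at bdev_raid.sh@201:
    #   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
    # (applied here to the already-unwrapped raid section).
    return [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]

names = configured_base_bdevs(raid_bdev_info)
print(names)  # ['pt1', 'pt2', 'pt3', 'pt4']

# The checks verify_raid_bdev_state performs on this record:
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert len(names) == raid_bdev_info["num_base_bdevs_operational"]
```

The jq pipeline in the log does the same selection in shell; the Python form just makes the filter's semantics explicit: only base bdevs with `is_configured == true` count toward the discovered/operational totals that the state check compares.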
00:23:02.776 [2024-06-10 19:07:17.473236] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x87fb90 00:23:02.776 [2024-06-10 19:07:17.473371] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x881800 00:23:02.776 [2024-06-10 19:07:17.473381] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x881800 00:23:02.776 [2024-06-10 19:07:17.473466] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.776 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.035 19:07:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.035 "name": "raid_bdev1", 00:23:03.035 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:03.035 "strip_size_kb": 0, 00:23:03.035 "state": "online", 00:23:03.035 "raid_level": "raid1", 00:23:03.035 "superblock": true, 00:23:03.035 "num_base_bdevs": 4, 00:23:03.035 "num_base_bdevs_discovered": 4, 00:23:03.035 "num_base_bdevs_operational": 4, 00:23:03.035 "base_bdevs_list": [ 00:23:03.035 { 00:23:03.035 "name": "pt1", 00:23:03.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:03.035 "is_configured": true, 00:23:03.035 "data_offset": 2048, 00:23:03.035 "data_size": 63488 00:23:03.035 }, 00:23:03.035 { 00:23:03.035 "name": "pt2", 00:23:03.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.035 "is_configured": true, 00:23:03.035 "data_offset": 2048, 00:23:03.035 "data_size": 63488 00:23:03.035 }, 00:23:03.035 { 00:23:03.035 "name": "pt3", 00:23:03.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.035 "is_configured": true, 00:23:03.035 "data_offset": 2048, 00:23:03.035 "data_size": 63488 00:23:03.035 }, 00:23:03.035 { 00:23:03.035 "name": "pt4", 00:23:03.035 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:03.035 "is_configured": true, 00:23:03.035 "data_offset": 2048, 00:23:03.035 "data_size": 63488 00:23:03.035 } 00:23:03.035 ] 00:23:03.035 }' 00:23:03.035 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.035 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:03.603 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:03.863 [2024-06-10 19:07:18.502610] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:03.863 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:03.863 "name": "raid_bdev1", 00:23:03.863 "aliases": [ 00:23:03.863 "a5909719-add5-4fb0-9ceb-afb856053f44" 00:23:03.863 ], 00:23:03.863 "product_name": "Raid Volume", 00:23:03.863 "block_size": 512, 00:23:03.863 "num_blocks": 63488, 00:23:03.863 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:03.863 "assigned_rate_limits": { 00:23:03.863 "rw_ios_per_sec": 0, 00:23:03.863 "rw_mbytes_per_sec": 0, 00:23:03.863 "r_mbytes_per_sec": 0, 00:23:03.863 "w_mbytes_per_sec": 0 00:23:03.863 }, 00:23:03.863 "claimed": false, 00:23:03.863 "zoned": false, 00:23:03.863 "supported_io_types": { 00:23:03.863 "read": true, 00:23:03.863 "write": true, 00:23:03.863 "unmap": false, 00:23:03.863 "write_zeroes": true, 00:23:03.863 "flush": false, 00:23:03.863 "reset": true, 00:23:03.863 "compare": false, 00:23:03.863 "compare_and_write": false, 00:23:03.863 "abort": false, 00:23:03.863 "nvme_admin": false, 00:23:03.863 "nvme_io": false 00:23:03.863 }, 00:23:03.863 "memory_domains": [ 00:23:03.863 { 00:23:03.863 "dma_device_id": "system", 00:23:03.863 "dma_device_type": 1 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.863 "dma_device_type": 2 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "system", 00:23:03.863 
"dma_device_type": 1 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.863 "dma_device_type": 2 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "system", 00:23:03.863 "dma_device_type": 1 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.863 "dma_device_type": 2 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "system", 00:23:03.863 "dma_device_type": 1 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.863 "dma_device_type": 2 00:23:03.863 } 00:23:03.863 ], 00:23:03.863 "driver_specific": { 00:23:03.863 "raid": { 00:23:03.863 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:03.863 "strip_size_kb": 0, 00:23:03.863 "state": "online", 00:23:03.863 "raid_level": "raid1", 00:23:03.863 "superblock": true, 00:23:03.863 "num_base_bdevs": 4, 00:23:03.863 "num_base_bdevs_discovered": 4, 00:23:03.863 "num_base_bdevs_operational": 4, 00:23:03.863 "base_bdevs_list": [ 00:23:03.863 { 00:23:03.863 "name": "pt1", 00:23:03.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:03.863 "is_configured": true, 00:23:03.863 "data_offset": 2048, 00:23:03.863 "data_size": 63488 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "name": "pt2", 00:23:03.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.863 "is_configured": true, 00:23:03.863 "data_offset": 2048, 00:23:03.863 "data_size": 63488 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "name": "pt3", 00:23:03.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.863 "is_configured": true, 00:23:03.863 "data_offset": 2048, 00:23:03.863 "data_size": 63488 00:23:03.863 }, 00:23:03.863 { 00:23:03.863 "name": "pt4", 00:23:03.863 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:03.863 "is_configured": true, 00:23:03.863 "data_offset": 2048, 00:23:03.863 "data_size": 63488 00:23:03.863 } 00:23:03.863 ] 00:23:03.863 } 00:23:03.863 } 00:23:03.863 }' 00:23:03.863 
19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:03.863 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:03.863 pt2 00:23:03.863 pt3 00:23:03.863 pt4' 00:23:03.863 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.863 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:03.863 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:04.123 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:04.123 "name": "pt1", 00:23:04.123 "aliases": [ 00:23:04.123 "00000000-0000-0000-0000-000000000001" 00:23:04.123 ], 00:23:04.123 "product_name": "passthru", 00:23:04.123 "block_size": 512, 00:23:04.123 "num_blocks": 65536, 00:23:04.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:04.123 "assigned_rate_limits": { 00:23:04.123 "rw_ios_per_sec": 0, 00:23:04.123 "rw_mbytes_per_sec": 0, 00:23:04.123 "r_mbytes_per_sec": 0, 00:23:04.123 "w_mbytes_per_sec": 0 00:23:04.123 }, 00:23:04.123 "claimed": true, 00:23:04.123 "claim_type": "exclusive_write", 00:23:04.123 "zoned": false, 00:23:04.123 "supported_io_types": { 00:23:04.123 "read": true, 00:23:04.123 "write": true, 00:23:04.123 "unmap": true, 00:23:04.123 "write_zeroes": true, 00:23:04.123 "flush": true, 00:23:04.123 "reset": true, 00:23:04.123 "compare": false, 00:23:04.123 "compare_and_write": false, 00:23:04.123 "abort": true, 00:23:04.123 "nvme_admin": false, 00:23:04.123 "nvme_io": false 00:23:04.123 }, 00:23:04.123 "memory_domains": [ 00:23:04.123 { 00:23:04.123 "dma_device_id": "system", 00:23:04.123 "dma_device_type": 1 00:23:04.123 }, 00:23:04.123 { 00:23:04.123 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:04.123 "dma_device_type": 2 00:23:04.123 } 00:23:04.123 ], 00:23:04.123 "driver_specific": { 00:23:04.123 "passthru": { 00:23:04.123 "name": "pt1", 00:23:04.123 "base_bdev_name": "malloc1" 00:23:04.123 } 00:23:04.123 } 00:23:04.123 }' 00:23:04.123 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.123 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.382 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:04.382 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.382 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.382 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:04.382 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.382 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.382 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:04.383 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.383 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.383 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:04.383 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:04.642 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:04.642 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:04.642 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:04.642 "name": "pt2", 00:23:04.642 
"aliases": [ 00:23:04.642 "00000000-0000-0000-0000-000000000002" 00:23:04.642 ], 00:23:04.642 "product_name": "passthru", 00:23:04.642 "block_size": 512, 00:23:04.642 "num_blocks": 65536, 00:23:04.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:04.642 "assigned_rate_limits": { 00:23:04.642 "rw_ios_per_sec": 0, 00:23:04.642 "rw_mbytes_per_sec": 0, 00:23:04.642 "r_mbytes_per_sec": 0, 00:23:04.642 "w_mbytes_per_sec": 0 00:23:04.642 }, 00:23:04.642 "claimed": true, 00:23:04.642 "claim_type": "exclusive_write", 00:23:04.642 "zoned": false, 00:23:04.642 "supported_io_types": { 00:23:04.642 "read": true, 00:23:04.642 "write": true, 00:23:04.642 "unmap": true, 00:23:04.642 "write_zeroes": true, 00:23:04.642 "flush": true, 00:23:04.642 "reset": true, 00:23:04.642 "compare": false, 00:23:04.642 "compare_and_write": false, 00:23:04.642 "abort": true, 00:23:04.642 "nvme_admin": false, 00:23:04.642 "nvme_io": false 00:23:04.642 }, 00:23:04.642 "memory_domains": [ 00:23:04.642 { 00:23:04.642 "dma_device_id": "system", 00:23:04.642 "dma_device_type": 1 00:23:04.642 }, 00:23:04.642 { 00:23:04.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.642 "dma_device_type": 2 00:23:04.642 } 00:23:04.642 ], 00:23:04.642 "driver_specific": { 00:23:04.642 "passthru": { 00:23:04.642 "name": "pt2", 00:23:04.642 "base_bdev_name": "malloc2" 00:23:04.642 } 00:23:04.642 } 00:23:04.642 }' 00:23:04.642 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:04.902 
19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:04.902 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:05.172 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:05.172 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:05.172 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:05.172 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:05.172 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:05.502 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:05.502 "name": "pt3", 00:23:05.502 "aliases": [ 00:23:05.502 "00000000-0000-0000-0000-000000000003" 00:23:05.502 ], 00:23:05.502 "product_name": "passthru", 00:23:05.502 "block_size": 512, 00:23:05.502 "num_blocks": 65536, 00:23:05.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:05.502 "assigned_rate_limits": { 00:23:05.502 "rw_ios_per_sec": 0, 00:23:05.502 "rw_mbytes_per_sec": 0, 00:23:05.502 "r_mbytes_per_sec": 0, 00:23:05.502 "w_mbytes_per_sec": 0 00:23:05.502 }, 00:23:05.502 "claimed": true, 00:23:05.502 "claim_type": "exclusive_write", 00:23:05.502 "zoned": false, 00:23:05.502 "supported_io_types": { 00:23:05.502 "read": true, 00:23:05.502 "write": true, 00:23:05.502 "unmap": true, 00:23:05.502 "write_zeroes": true, 00:23:05.502 "flush": true, 00:23:05.502 "reset": true, 00:23:05.502 "compare": false, 00:23:05.502 "compare_and_write": false, 00:23:05.502 "abort": true, 
00:23:05.502 "nvme_admin": false, 00:23:05.502 "nvme_io": false 00:23:05.502 }, 00:23:05.502 "memory_domains": [ 00:23:05.502 { 00:23:05.502 "dma_device_id": "system", 00:23:05.502 "dma_device_type": 1 00:23:05.502 }, 00:23:05.502 { 00:23:05.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.502 "dma_device_type": 2 00:23:05.502 } 00:23:05.502 ], 00:23:05.502 "driver_specific": { 00:23:05.502 "passthru": { 00:23:05.502 "name": "pt3", 00:23:05.502 "base_bdev_name": "malloc3" 00:23:05.502 } 00:23:05.502 } 00:23:05.502 }' 00:23:05.502 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:05.502 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:05.502 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:05.761 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:05.761 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:05.761 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:05.761 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:05.761 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:05.761 "name": "pt4", 00:23:05.761 "aliases": [ 00:23:05.761 "00000000-0000-0000-0000-000000000004" 00:23:05.761 ], 00:23:05.761 "product_name": "passthru", 00:23:05.761 "block_size": 512, 00:23:05.761 "num_blocks": 65536, 00:23:05.761 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:05.761 "assigned_rate_limits": { 00:23:05.761 "rw_ios_per_sec": 0, 00:23:05.761 "rw_mbytes_per_sec": 0, 00:23:05.761 "r_mbytes_per_sec": 0, 00:23:05.761 "w_mbytes_per_sec": 0 00:23:05.761 }, 00:23:05.761 "claimed": true, 00:23:05.761 "claim_type": "exclusive_write", 00:23:05.761 "zoned": false, 00:23:05.761 "supported_io_types": { 00:23:05.761 "read": true, 00:23:05.761 "write": true, 00:23:05.761 "unmap": true, 00:23:05.762 "write_zeroes": true, 00:23:05.762 "flush": true, 00:23:05.762 "reset": true, 00:23:05.762 "compare": false, 00:23:05.762 "compare_and_write": false, 00:23:05.762 "abort": true, 00:23:05.762 "nvme_admin": false, 00:23:05.762 "nvme_io": false 00:23:05.762 }, 00:23:05.762 "memory_domains": [ 00:23:05.762 { 00:23:05.762 "dma_device_id": "system", 00:23:05.762 "dma_device_type": 1 00:23:05.762 }, 00:23:05.762 { 00:23:05.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.762 "dma_device_type": 2 00:23:05.762 } 00:23:05.762 ], 00:23:05.762 "driver_specific": { 00:23:05.762 "passthru": { 00:23:05.762 "name": "pt4", 00:23:05.762 "base_bdev_name": "malloc4" 00:23:05.762 } 00:23:05.762 } 00:23:05.762 }' 00:23:05.762 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:06.021 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:06.280 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:06.280 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:06.280 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:06.280 19:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:06.539 [2024-06-10 19:07:21.045295] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:06.539 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a5909719-add5-4fb0-9ceb-afb856053f44 00:23:06.539 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a5909719-add5-4fb0-9ceb-afb856053f44 ']' 00:23:06.539 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:06.539 [2024-06-10 19:07:21.273657] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:06.539 [2024-06-10 19:07:21.273675] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:06.539 
[2024-06-10 19:07:21.273717] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:06.540 [2024-06-10 19:07:21.273792] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:06.540 [2024-06-10 19:07:21.273803] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x881800 name raid_bdev1, state offline 00:23:06.540 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.540 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:06.798 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:06.799 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:06.799 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:06.799 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:07.057 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:07.057 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:07.317 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:07.317 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:07.576 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:07.576 19:07:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:07.836 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:07.836 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:08.096 [2024-06-10 19:07:22.809632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:08.096 [2024-06-10 19:07:22.810878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:08.096 [2024-06-10 19:07:22.810918] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:08.096 [2024-06-10 19:07:22.810950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:08.096 [2024-06-10 19:07:22.810989] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:08.096 [2024-06-10 19:07:22.811025] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:08.096 [2024-06-10 19:07:22.811046] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:08.096 [2024-06-10 19:07:22.811066] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:08.096 [2024-06-10 19:07:22.811083] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.096 [2024-06-10 19:07:22.811091] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x888010 name raid_bdev1, state configuring 00:23:08.096 request: 00:23:08.096 { 00:23:08.096 "name": "raid_bdev1", 00:23:08.096 "raid_level": "raid1", 00:23:08.096 "base_bdevs": [ 00:23:08.096 "malloc1", 00:23:08.096 "malloc2", 00:23:08.096 "malloc3", 00:23:08.096 "malloc4" 00:23:08.096 ], 00:23:08.096 "superblock": false, 00:23:08.096 "method": "bdev_raid_create", 00:23:08.096 "req_id": 1 00:23:08.096 } 00:23:08.096 Got JSON-RPC error response 00:23:08.096 response: 00:23:08.096 { 00:23:08.096 "code": -17, 00:23:08.096 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:08.096 } 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.096 19:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:08.355 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:08.355 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:08.355 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:08.615 [2024-06-10 19:07:23.262765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:08.615 [2024-06-10 19:07:23.262803] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.615 [2024-06-10 19:07:23.262820] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x880900 00:23:08.615 [2024-06-10 19:07:23.262832] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.615 [2024-06-10 19:07:23.264295] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.615 [2024-06-10 19:07:23.264323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:08.615 [2024-06-10 19:07:23.264382] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:08.615 [2024-06-10 19:07:23.264407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:08.615 pt1 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.615 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.875 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.875 "name": "raid_bdev1", 00:23:08.875 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:08.875 "strip_size_kb": 0, 00:23:08.875 "state": "configuring", 00:23:08.875 "raid_level": "raid1", 00:23:08.875 "superblock": true, 00:23:08.875 "num_base_bdevs": 4, 00:23:08.875 "num_base_bdevs_discovered": 1, 00:23:08.875 "num_base_bdevs_operational": 4, 00:23:08.875 "base_bdevs_list": [ 00:23:08.875 { 00:23:08.875 "name": "pt1", 00:23:08.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:08.875 "is_configured": true, 00:23:08.875 "data_offset": 2048, 00:23:08.875 "data_size": 63488 00:23:08.875 }, 00:23:08.875 { 00:23:08.875 "name": null, 00:23:08.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:08.875 "is_configured": false, 00:23:08.875 "data_offset": 2048, 00:23:08.875 "data_size": 63488 00:23:08.875 }, 00:23:08.875 { 00:23:08.875 "name": null, 00:23:08.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:08.875 "is_configured": false, 00:23:08.875 "data_offset": 2048, 00:23:08.875 "data_size": 63488 00:23:08.875 }, 00:23:08.875 { 00:23:08.875 "name": null, 00:23:08.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:08.875 "is_configured": false, 00:23:08.875 "data_offset": 2048, 00:23:08.875 "data_size": 63488 00:23:08.875 } 00:23:08.875 ] 00:23:08.875 }' 00:23:08.875 19:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.875 19:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:23:09.443 19:07:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:09.703 [2024-06-10 19:07:24.289474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:09.703 [2024-06-10 19:07:24.289523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.703 [2024-06-10 19:07:24.289540] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x87f530 00:23:09.703 [2024-06-10 19:07:24.289552] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.703 [2024-06-10 19:07:24.289861] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.703 [2024-06-10 19:07:24.289876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:09.703 [2024-06-10 19:07:24.289933] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:09.703 [2024-06-10 19:07:24.289950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:09.703 pt2 00:23:09.703 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:09.962 [2024-06-10 19:07:24.514070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.962 
19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.962 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.222 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.222 "name": "raid_bdev1", 00:23:10.222 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:10.222 "strip_size_kb": 0, 00:23:10.222 "state": "configuring", 00:23:10.222 "raid_level": "raid1", 00:23:10.222 "superblock": true, 00:23:10.222 "num_base_bdevs": 4, 00:23:10.222 "num_base_bdevs_discovered": 1, 00:23:10.222 "num_base_bdevs_operational": 4, 00:23:10.222 "base_bdevs_list": [ 00:23:10.222 { 00:23:10.222 "name": "pt1", 00:23:10.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:10.222 "is_configured": true, 00:23:10.222 "data_offset": 2048, 00:23:10.222 "data_size": 63488 00:23:10.222 }, 00:23:10.222 { 00:23:10.222 "name": null, 00:23:10.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.222 "is_configured": false, 00:23:10.222 "data_offset": 2048, 00:23:10.222 "data_size": 63488 00:23:10.222 }, 00:23:10.222 { 00:23:10.222 "name": null, 00:23:10.222 "uuid": "00000000-0000-0000-0000-000000000003", 
00:23:10.222 "is_configured": false, 00:23:10.222 "data_offset": 2048, 00:23:10.222 "data_size": 63488 00:23:10.222 }, 00:23:10.222 { 00:23:10.222 "name": null, 00:23:10.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:10.222 "is_configured": false, 00:23:10.222 "data_offset": 2048, 00:23:10.222 "data_size": 63488 00:23:10.222 } 00:23:10.222 ] 00:23:10.222 }' 00:23:10.222 19:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.222 19:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.790 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:10.790 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:10.790 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:10.790 [2024-06-10 19:07:25.492654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:10.790 [2024-06-10 19:07:25.492704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.791 [2024-06-10 19:07:25.492721] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x87f140 00:23:10.791 [2024-06-10 19:07:25.492733] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.791 [2024-06-10 19:07:25.493037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.791 [2024-06-10 19:07:25.493052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:10.791 [2024-06-10 19:07:25.493106] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:10.791 [2024-06-10 19:07:25.493123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:10.791 
pt2 00:23:10.791 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:10.791 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:10.791 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:11.050 [2024-06-10 19:07:25.665113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:11.050 [2024-06-10 19:07:25.665144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.050 [2024-06-10 19:07:25.665160] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa22500 00:23:11.050 [2024-06-10 19:07:25.665171] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.050 [2024-06-10 19:07:25.665424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.050 [2024-06-10 19:07:25.665440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:11.050 [2024-06-10 19:07:25.665485] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:11.050 [2024-06-10 19:07:25.665500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:11.050 pt3 00:23:11.050 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:11.050 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:11.050 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:11.309 [2024-06-10 19:07:25.857625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc4 00:23:11.309 [2024-06-10 19:07:25.857650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.309 [2024-06-10 19:07:25.857664] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x87e880 00:23:11.309 [2024-06-10 19:07:25.857675] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.309 [2024-06-10 19:07:25.857909] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.309 [2024-06-10 19:07:25.857925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:11.309 [2024-06-10 19:07:25.857967] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:11.309 [2024-06-10 19:07:25.857981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:11.309 [2024-06-10 19:07:25.858084] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x880ba0 00:23:11.309 [2024-06-10 19:07:25.858093] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:11.309 [2024-06-10 19:07:25.858241] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8864a0 00:23:11.309 [2024-06-10 19:07:25.858360] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x880ba0 00:23:11.309 [2024-06-10 19:07:25.858369] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x880ba0 00:23:11.309 [2024-06-10 19:07:25.858453] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.309 pt4 00:23:11.309 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:11.309 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:11.309 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:11.309 19:07:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.309 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.309 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.310 19:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.569 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.569 "name": "raid_bdev1", 00:23:11.569 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:11.569 "strip_size_kb": 0, 00:23:11.569 "state": "online", 00:23:11.569 "raid_level": "raid1", 00:23:11.569 "superblock": true, 00:23:11.569 "num_base_bdevs": 4, 00:23:11.569 "num_base_bdevs_discovered": 4, 00:23:11.569 "num_base_bdevs_operational": 4, 00:23:11.569 "base_bdevs_list": [ 00:23:11.569 { 00:23:11.569 "name": "pt1", 00:23:11.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:11.569 "is_configured": true, 00:23:11.569 "data_offset": 2048, 00:23:11.569 "data_size": 63488 00:23:11.569 }, 00:23:11.569 { 
00:23:11.569 "name": "pt2", 00:23:11.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.569 "is_configured": true, 00:23:11.569 "data_offset": 2048, 00:23:11.569 "data_size": 63488 00:23:11.569 }, 00:23:11.569 { 00:23:11.569 "name": "pt3", 00:23:11.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:11.569 "is_configured": true, 00:23:11.569 "data_offset": 2048, 00:23:11.569 "data_size": 63488 00:23:11.569 }, 00:23:11.569 { 00:23:11.569 "name": "pt4", 00:23:11.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:11.569 "is_configured": true, 00:23:11.569 "data_offset": 2048, 00:23:11.569 "data_size": 63488 00:23:11.569 } 00:23:11.569 ] 00:23:11.569 }' 00:23:11.569 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.569 19:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:12.138 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:12.397 [2024-06-10 19:07:26.904647] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.397 19:07:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:12.397 "name": "raid_bdev1", 00:23:12.397 "aliases": [ 00:23:12.397 "a5909719-add5-4fb0-9ceb-afb856053f44" 00:23:12.397 ], 00:23:12.397 "product_name": "Raid Volume", 00:23:12.397 "block_size": 512, 00:23:12.397 "num_blocks": 63488, 00:23:12.397 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:12.397 "assigned_rate_limits": { 00:23:12.397 "rw_ios_per_sec": 0, 00:23:12.397 "rw_mbytes_per_sec": 0, 00:23:12.397 "r_mbytes_per_sec": 0, 00:23:12.397 "w_mbytes_per_sec": 0 00:23:12.397 }, 00:23:12.397 "claimed": false, 00:23:12.397 "zoned": false, 00:23:12.397 "supported_io_types": { 00:23:12.397 "read": true, 00:23:12.397 "write": true, 00:23:12.397 "unmap": false, 00:23:12.397 "write_zeroes": true, 00:23:12.397 "flush": false, 00:23:12.397 "reset": true, 00:23:12.397 "compare": false, 00:23:12.397 "compare_and_write": false, 00:23:12.397 "abort": false, 00:23:12.397 "nvme_admin": false, 00:23:12.397 "nvme_io": false 00:23:12.397 }, 00:23:12.397 "memory_domains": [ 00:23:12.397 { 00:23:12.397 "dma_device_id": "system", 00:23:12.397 "dma_device_type": 1 00:23:12.397 }, 00:23:12.397 { 00:23:12.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.397 "dma_device_type": 2 00:23:12.397 }, 00:23:12.397 { 00:23:12.397 "dma_device_id": "system", 00:23:12.397 "dma_device_type": 1 00:23:12.397 }, 00:23:12.397 { 00:23:12.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.397 "dma_device_type": 2 00:23:12.397 }, 00:23:12.397 { 00:23:12.397 "dma_device_id": "system", 00:23:12.397 "dma_device_type": 1 00:23:12.397 }, 00:23:12.397 { 00:23:12.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.397 "dma_device_type": 2 00:23:12.397 }, 00:23:12.398 { 00:23:12.398 "dma_device_id": "system", 00:23:12.398 "dma_device_type": 1 00:23:12.398 }, 00:23:12.398 { 00:23:12.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.398 "dma_device_type": 2 00:23:12.398 } 00:23:12.398 ], 00:23:12.398 "driver_specific": { 00:23:12.398 "raid": 
{ 00:23:12.398 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:12.398 "strip_size_kb": 0, 00:23:12.398 "state": "online", 00:23:12.398 "raid_level": "raid1", 00:23:12.398 "superblock": true, 00:23:12.398 "num_base_bdevs": 4, 00:23:12.398 "num_base_bdevs_discovered": 4, 00:23:12.398 "num_base_bdevs_operational": 4, 00:23:12.398 "base_bdevs_list": [ 00:23:12.398 { 00:23:12.398 "name": "pt1", 00:23:12.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:12.398 "is_configured": true, 00:23:12.398 "data_offset": 2048, 00:23:12.398 "data_size": 63488 00:23:12.398 }, 00:23:12.398 { 00:23:12.398 "name": "pt2", 00:23:12.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:12.398 "is_configured": true, 00:23:12.398 "data_offset": 2048, 00:23:12.398 "data_size": 63488 00:23:12.398 }, 00:23:12.398 { 00:23:12.398 "name": "pt3", 00:23:12.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:12.398 "is_configured": true, 00:23:12.398 "data_offset": 2048, 00:23:12.398 "data_size": 63488 00:23:12.398 }, 00:23:12.398 { 00:23:12.398 "name": "pt4", 00:23:12.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:12.398 "is_configured": true, 00:23:12.398 "data_offset": 2048, 00:23:12.398 "data_size": 63488 00:23:12.398 } 00:23:12.398 ] 00:23:12.398 } 00:23:12.398 } 00:23:12.398 }' 00:23:12.398 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:12.398 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:12.398 pt2 00:23:12.398 pt3 00:23:12.398 pt4' 00:23:12.398 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.398 19:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:12.398 19:07:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.657 "name": "pt1", 00:23:12.657 "aliases": [ 00:23:12.657 "00000000-0000-0000-0000-000000000001" 00:23:12.657 ], 00:23:12.657 "product_name": "passthru", 00:23:12.657 "block_size": 512, 00:23:12.657 "num_blocks": 65536, 00:23:12.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:12.657 "assigned_rate_limits": { 00:23:12.657 "rw_ios_per_sec": 0, 00:23:12.657 "rw_mbytes_per_sec": 0, 00:23:12.657 "r_mbytes_per_sec": 0, 00:23:12.657 "w_mbytes_per_sec": 0 00:23:12.657 }, 00:23:12.657 "claimed": true, 00:23:12.657 "claim_type": "exclusive_write", 00:23:12.657 "zoned": false, 00:23:12.657 "supported_io_types": { 00:23:12.657 "read": true, 00:23:12.657 "write": true, 00:23:12.657 "unmap": true, 00:23:12.657 "write_zeroes": true, 00:23:12.657 "flush": true, 00:23:12.657 "reset": true, 00:23:12.657 "compare": false, 00:23:12.657 "compare_and_write": false, 00:23:12.657 "abort": true, 00:23:12.657 "nvme_admin": false, 00:23:12.657 "nvme_io": false 00:23:12.657 }, 00:23:12.657 "memory_domains": [ 00:23:12.657 { 00:23:12.657 "dma_device_id": "system", 00:23:12.657 "dma_device_type": 1 00:23:12.657 }, 00:23:12.657 { 00:23:12.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.657 "dma_device_type": 2 00:23:12.657 } 00:23:12.657 ], 00:23:12.657 "driver_specific": { 00:23:12.657 "passthru": { 00:23:12.657 "name": "pt1", 00:23:12.657 "base_bdev_name": "malloc1" 00:23:12.657 } 00:23:12.657 } 00:23:12.657 }' 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.657 19:07:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.657 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.916 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.916 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.916 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.917 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:12.917 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.917 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:12.917 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:13.176 "name": "pt2", 00:23:13.176 "aliases": [ 00:23:13.176 "00000000-0000-0000-0000-000000000002" 00:23:13.176 ], 00:23:13.176 "product_name": "passthru", 00:23:13.176 "block_size": 512, 00:23:13.176 "num_blocks": 65536, 00:23:13.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:13.176 "assigned_rate_limits": { 00:23:13.176 "rw_ios_per_sec": 0, 00:23:13.176 "rw_mbytes_per_sec": 0, 00:23:13.176 "r_mbytes_per_sec": 0, 00:23:13.176 "w_mbytes_per_sec": 0 00:23:13.176 }, 00:23:13.176 "claimed": true, 00:23:13.176 "claim_type": "exclusive_write", 00:23:13.176 "zoned": false, 00:23:13.176 "supported_io_types": { 00:23:13.176 "read": true, 00:23:13.176 "write": true, 00:23:13.176 "unmap": true, 00:23:13.176 
"write_zeroes": true, 00:23:13.176 "flush": true, 00:23:13.176 "reset": true, 00:23:13.176 "compare": false, 00:23:13.176 "compare_and_write": false, 00:23:13.176 "abort": true, 00:23:13.176 "nvme_admin": false, 00:23:13.176 "nvme_io": false 00:23:13.176 }, 00:23:13.176 "memory_domains": [ 00:23:13.176 { 00:23:13.176 "dma_device_id": "system", 00:23:13.176 "dma_device_type": 1 00:23:13.176 }, 00:23:13.176 { 00:23:13.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.176 "dma_device_type": 2 00:23:13.176 } 00:23:13.176 ], 00:23:13.176 "driver_specific": { 00:23:13.176 "passthru": { 00:23:13.176 "name": "pt2", 00:23:13.176 "base_bdev_name": "malloc2" 00:23:13.176 } 00:23:13.176 } 00:23:13.176 }' 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:13.176 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.436 19:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:13.436 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.695 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:13.696 "name": "pt3", 00:23:13.696 "aliases": [ 00:23:13.696 "00000000-0000-0000-0000-000000000003" 00:23:13.696 ], 00:23:13.696 "product_name": "passthru", 00:23:13.696 "block_size": 512, 00:23:13.696 "num_blocks": 65536, 00:23:13.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:13.696 "assigned_rate_limits": { 00:23:13.696 "rw_ios_per_sec": 0, 00:23:13.696 "rw_mbytes_per_sec": 0, 00:23:13.696 "r_mbytes_per_sec": 0, 00:23:13.696 "w_mbytes_per_sec": 0 00:23:13.696 }, 00:23:13.696 "claimed": true, 00:23:13.696 "claim_type": "exclusive_write", 00:23:13.696 "zoned": false, 00:23:13.696 "supported_io_types": { 00:23:13.696 "read": true, 00:23:13.696 "write": true, 00:23:13.696 "unmap": true, 00:23:13.696 "write_zeroes": true, 00:23:13.696 "flush": true, 00:23:13.696 "reset": true, 00:23:13.696 "compare": false, 00:23:13.696 "compare_and_write": false, 00:23:13.696 "abort": true, 00:23:13.696 "nvme_admin": false, 00:23:13.696 "nvme_io": false 00:23:13.696 }, 00:23:13.696 "memory_domains": [ 00:23:13.696 { 00:23:13.696 "dma_device_id": "system", 00:23:13.696 "dma_device_type": 1 00:23:13.696 }, 00:23:13.696 { 00:23:13.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.696 "dma_device_type": 2 00:23:13.696 } 00:23:13.696 ], 00:23:13.696 "driver_specific": { 00:23:13.696 "passthru": { 00:23:13.696 "name": "pt3", 00:23:13.696 "base_bdev_name": "malloc3" 00:23:13.696 } 00:23:13.696 } 00:23:13.696 }' 00:23:13.696 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.696 19:07:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.696 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:13.696 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.696 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.955 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:14.213 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:14.213 "name": "pt4", 00:23:14.213 "aliases": [ 00:23:14.213 "00000000-0000-0000-0000-000000000004" 00:23:14.213 ], 00:23:14.213 "product_name": "passthru", 00:23:14.213 "block_size": 512, 00:23:14.213 "num_blocks": 65536, 00:23:14.213 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:14.213 "assigned_rate_limits": { 00:23:14.213 "rw_ios_per_sec": 0, 00:23:14.213 "rw_mbytes_per_sec": 0, 00:23:14.213 "r_mbytes_per_sec": 0, 00:23:14.213 
"w_mbytes_per_sec": 0 00:23:14.213 }, 00:23:14.213 "claimed": true, 00:23:14.213 "claim_type": "exclusive_write", 00:23:14.213 "zoned": false, 00:23:14.213 "supported_io_types": { 00:23:14.213 "read": true, 00:23:14.213 "write": true, 00:23:14.213 "unmap": true, 00:23:14.213 "write_zeroes": true, 00:23:14.213 "flush": true, 00:23:14.213 "reset": true, 00:23:14.213 "compare": false, 00:23:14.213 "compare_and_write": false, 00:23:14.213 "abort": true, 00:23:14.213 "nvme_admin": false, 00:23:14.213 "nvme_io": false 00:23:14.213 }, 00:23:14.213 "memory_domains": [ 00:23:14.213 { 00:23:14.213 "dma_device_id": "system", 00:23:14.213 "dma_device_type": 1 00:23:14.213 }, 00:23:14.213 { 00:23:14.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.213 "dma_device_type": 2 00:23:14.213 } 00:23:14.213 ], 00:23:14.213 "driver_specific": { 00:23:14.213 "passthru": { 00:23:14.213 "name": "pt4", 00:23:14.213 "base_bdev_name": "malloc4" 00:23:14.213 } 00:23:14.213 } 00:23:14.213 }' 00:23:14.213 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.213 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.213 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:14.213 19:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.471 19:07:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:14.471 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:14.730 [2024-06-10 19:07:29.339228] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.730 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a5909719-add5-4fb0-9ceb-afb856053f44 '!=' a5909719-add5-4fb0-9ceb-afb856053f44 ']' 00:23:14.730 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:14.730 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:14.730 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:14.731 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:14.989 [2024-06-10 19:07:29.567612] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:14.989 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:14.989 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.989 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.989 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.989 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.989 19:07:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:14.990 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.990 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.990 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.990 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.990 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.990 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.249 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.249 "name": "raid_bdev1", 00:23:15.249 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:15.249 "strip_size_kb": 0, 00:23:15.249 "state": "online", 00:23:15.249 "raid_level": "raid1", 00:23:15.249 "superblock": true, 00:23:15.249 "num_base_bdevs": 4, 00:23:15.249 "num_base_bdevs_discovered": 3, 00:23:15.249 "num_base_bdevs_operational": 3, 00:23:15.249 "base_bdevs_list": [ 00:23:15.249 { 00:23:15.249 "name": null, 00:23:15.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.249 "is_configured": false, 00:23:15.249 "data_offset": 2048, 00:23:15.249 "data_size": 63488 00:23:15.249 }, 00:23:15.249 { 00:23:15.249 "name": "pt2", 00:23:15.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.249 "is_configured": true, 00:23:15.249 "data_offset": 2048, 00:23:15.249 "data_size": 63488 00:23:15.249 }, 00:23:15.249 { 00:23:15.249 "name": "pt3", 00:23:15.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:15.249 "is_configured": true, 00:23:15.249 "data_offset": 2048, 00:23:15.249 "data_size": 63488 00:23:15.249 }, 00:23:15.249 { 00:23:15.249 "name": 
"pt4", 00:23:15.249 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:15.249 "is_configured": true, 00:23:15.249 "data_offset": 2048, 00:23:15.249 "data_size": 63488 00:23:15.249 } 00:23:15.249 ] 00:23:15.249 }' 00:23:15.249 19:07:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.249 19:07:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.818 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:16.077 [2024-06-10 19:07:30.590280] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.077 [2024-06-10 19:07:30.590305] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.077 [2024-06-10 19:07:30.590350] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.077 [2024-06-10 19:07:30.590409] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.077 [2024-06-10 19:07:30.590425] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x880ba0 name raid_bdev1, state offline 00:23:16.077 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:16.077 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.336 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:16.337 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:16.337 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:16.337 19:07:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:16.337 19:07:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:16.337 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:16.337 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:16.337 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:16.596 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:16.596 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:16.596 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:16.855 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:16.855 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:16.855 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:16.855 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:16.855 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:17.114 [2024-06-10 19:07:31.733233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:17.114 [2024-06-10 19:07:31.733273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.114 [2024-06-10 19:07:31.733289] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x887da0 
00:23:17.114 [2024-06-10 19:07:31.733301] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.114 [2024-06-10 19:07:31.734777] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.114 [2024-06-10 19:07:31.734804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:17.114 [2024-06-10 19:07:31.734860] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:17.114 [2024-06-10 19:07:31.734883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:17.114 pt2 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.114 19:07:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.373 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.373 "name": "raid_bdev1", 00:23:17.373 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:17.373 "strip_size_kb": 0, 00:23:17.373 "state": "configuring", 00:23:17.373 "raid_level": "raid1", 00:23:17.374 "superblock": true, 00:23:17.374 "num_base_bdevs": 4, 00:23:17.374 "num_base_bdevs_discovered": 1, 00:23:17.374 "num_base_bdevs_operational": 3, 00:23:17.374 "base_bdevs_list": [ 00:23:17.374 { 00:23:17.374 "name": null, 00:23:17.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.374 "is_configured": false, 00:23:17.374 "data_offset": 2048, 00:23:17.374 "data_size": 63488 00:23:17.374 }, 00:23:17.374 { 00:23:17.374 "name": "pt2", 00:23:17.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.374 "is_configured": true, 00:23:17.374 "data_offset": 2048, 00:23:17.374 "data_size": 63488 00:23:17.374 }, 00:23:17.374 { 00:23:17.374 "name": null, 00:23:17.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:17.374 "is_configured": false, 00:23:17.374 "data_offset": 2048, 00:23:17.374 "data_size": 63488 00:23:17.374 }, 00:23:17.374 { 00:23:17.374 "name": null, 00:23:17.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:17.374 "is_configured": false, 00:23:17.374 "data_offset": 2048, 00:23:17.374 "data_size": 63488 00:23:17.374 } 00:23:17.374 ] 00:23:17.374 }' 00:23:17.374 19:07:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.374 19:07:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.940 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:17.940 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:17.940 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:18.199 [2024-06-10 19:07:32.771959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:18.199 [2024-06-10 19:07:32.771999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.199 [2024-06-10 19:07:32.772016] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x880e20 00:23:18.199 [2024-06-10 19:07:32.772027] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.199 [2024-06-10 19:07:32.772326] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.199 [2024-06-10 19:07:32.772341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:18.199 [2024-06-10 19:07:32.772393] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:18.199 [2024-06-10 19:07:32.772410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:18.199 pt3 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.199 19:07:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.199 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.200 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.200 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.459 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.459 "name": "raid_bdev1", 00:23:18.459 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:18.459 "strip_size_kb": 0, 00:23:18.459 "state": "configuring", 00:23:18.459 "raid_level": "raid1", 00:23:18.459 "superblock": true, 00:23:18.459 "num_base_bdevs": 4, 00:23:18.459 "num_base_bdevs_discovered": 2, 00:23:18.459 "num_base_bdevs_operational": 3, 00:23:18.459 "base_bdevs_list": [ 00:23:18.459 { 00:23:18.459 "name": null, 00:23:18.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.459 "is_configured": false, 00:23:18.459 "data_offset": 2048, 00:23:18.459 "data_size": 63488 00:23:18.459 }, 00:23:18.459 { 00:23:18.459 "name": "pt2", 00:23:18.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.459 "is_configured": true, 00:23:18.459 "data_offset": 2048, 00:23:18.459 "data_size": 63488 00:23:18.459 }, 00:23:18.459 { 00:23:18.459 "name": "pt3", 00:23:18.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:18.459 "is_configured": true, 00:23:18.459 "data_offset": 2048, 00:23:18.459 "data_size": 63488 00:23:18.459 }, 00:23:18.459 { 00:23:18.459 "name": null, 00:23:18.459 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:18.459 "is_configured": false, 00:23:18.459 "data_offset": 2048, 00:23:18.459 "data_size": 63488 00:23:18.459 } 
00:23:18.459 ] 00:23:18.459 }' 00:23:18.459 19:07:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.459 19:07:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:19.027 [2024-06-10 19:07:33.750544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:19.027 [2024-06-10 19:07:33.750593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.027 [2024-06-10 19:07:33.750611] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa21d00 00:23:19.027 [2024-06-10 19:07:33.750623] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.027 [2024-06-10 19:07:33.750931] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.027 [2024-06-10 19:07:33.750947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:19.027 [2024-06-10 19:07:33.751004] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:19.027 [2024-06-10 19:07:33.751021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:19.027 [2024-06-10 19:07:33.751121] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x87e880 00:23:19.027 [2024-06-10 19:07:33.751130] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:19.027 
[2024-06-10 19:07:33.751283] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x87feb0 00:23:19.027 [2024-06-10 19:07:33.751403] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x87e880 00:23:19.027 [2024-06-10 19:07:33.751412] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x87e880 00:23:19.027 [2024-06-10 19:07:33.751499] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.027 pt4 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.027 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.028 19:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.287 19:07:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.287 "name": "raid_bdev1", 00:23:19.287 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:19.287 "strip_size_kb": 0, 00:23:19.287 "state": "online", 00:23:19.287 "raid_level": "raid1", 00:23:19.287 "superblock": true, 00:23:19.287 "num_base_bdevs": 4, 00:23:19.287 "num_base_bdevs_discovered": 3, 00:23:19.287 "num_base_bdevs_operational": 3, 00:23:19.287 "base_bdevs_list": [ 00:23:19.287 { 00:23:19.287 "name": null, 00:23:19.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.287 "is_configured": false, 00:23:19.287 "data_offset": 2048, 00:23:19.287 "data_size": 63488 00:23:19.287 }, 00:23:19.287 { 00:23:19.287 "name": "pt2", 00:23:19.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.287 "is_configured": true, 00:23:19.287 "data_offset": 2048, 00:23:19.287 "data_size": 63488 00:23:19.287 }, 00:23:19.287 { 00:23:19.287 "name": "pt3", 00:23:19.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:19.287 "is_configured": true, 00:23:19.287 "data_offset": 2048, 00:23:19.287 "data_size": 63488 00:23:19.287 }, 00:23:19.287 { 00:23:19.287 "name": "pt4", 00:23:19.287 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:19.287 "is_configured": true, 00:23:19.287 "data_offset": 2048, 00:23:19.287 "data_size": 63488 00:23:19.287 } 00:23:19.287 ] 00:23:19.287 }' 00:23:19.287 19:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.287 19:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.855 19:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:20.113 [2024-06-10 19:07:34.797268] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:20.113 [2024-06-10 19:07:34.797290] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:23:20.113 [2024-06-10 19:07:34.797338] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.113 [2024-06-10 19:07:34.797396] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.113 [2024-06-10 19:07:34.797406] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x87e880 name raid_bdev1, state offline 00:23:20.114 19:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.114 19:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:20.372 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:20.372 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:20.372 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:23:20.372 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:23:20.373 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:20.632 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:20.891 [2024-06-10 19:07:35.487059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:20.891 [2024-06-10 19:07:35.487101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.891 [2024-06-10 19:07:35.487116] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa20a80 00:23:20.891 [2024-06-10 19:07:35.487128] vbdev_passthru.c: 695:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:23:20.891 [2024-06-10 19:07:35.488620] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.891 [2024-06-10 19:07:35.488653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:20.891 [2024-06-10 19:07:35.488711] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:20.891 [2024-06-10 19:07:35.488734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:20.891 [2024-06-10 19:07:35.488823] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:20.891 [2024-06-10 19:07:35.488834] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:20.891 [2024-06-10 19:07:35.488847] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa221c0 name raid_bdev1, state configuring 00:23:20.891 [2024-06-10 19:07:35.488868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:20.891 [2024-06-10 19:07:35.488934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:20.891 pt1 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.891 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.892 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.892 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.151 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.151 "name": "raid_bdev1", 00:23:21.151 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:21.151 "strip_size_kb": 0, 00:23:21.151 "state": "configuring", 00:23:21.151 "raid_level": "raid1", 00:23:21.151 "superblock": true, 00:23:21.151 "num_base_bdevs": 4, 00:23:21.151 "num_base_bdevs_discovered": 2, 00:23:21.151 "num_base_bdevs_operational": 3, 00:23:21.151 "base_bdevs_list": [ 00:23:21.151 { 00:23:21.151 "name": null, 00:23:21.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.151 "is_configured": false, 00:23:21.151 "data_offset": 2048, 00:23:21.151 "data_size": 63488 00:23:21.151 }, 00:23:21.151 { 00:23:21.151 "name": "pt2", 00:23:21.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.151 "is_configured": true, 00:23:21.151 "data_offset": 2048, 00:23:21.151 "data_size": 63488 00:23:21.151 }, 00:23:21.151 { 00:23:21.151 "name": "pt3", 00:23:21.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:21.151 "is_configured": true, 00:23:21.151 "data_offset": 2048, 00:23:21.151 "data_size": 63488 00:23:21.151 }, 00:23:21.151 { 00:23:21.151 "name": null, 00:23:21.151 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:23:21.151 "is_configured": false, 00:23:21.151 "data_offset": 2048, 00:23:21.151 "data_size": 63488 00:23:21.151 } 00:23:21.151 ] 00:23:21.151 }' 00:23:21.151 19:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.151 19:07:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.720 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:21.720 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:21.979 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:23:21.979 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:22.238 [2024-06-10 19:07:36.742385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:22.238 [2024-06-10 19:07:36.742431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.238 [2024-06-10 19:07:36.742450] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x886f20 00:23:22.238 [2024-06-10 19:07:36.742461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.238 [2024-06-10 19:07:36.742789] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.238 [2024-06-10 19:07:36.742807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:22.238 [2024-06-10 19:07:36.742865] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:22.238 [2024-06-10 19:07:36.742882] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:22.238 [2024-06-10 19:07:36.742982] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa22440 00:23:22.238 [2024-06-10 19:07:36.742991] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:22.238 [2024-06-10 19:07:36.743145] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x886560 00:23:22.238 [2024-06-10 19:07:36.743260] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa22440 00:23:22.238 [2024-06-10 19:07:36.743269] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa22440 00:23:22.238 [2024-06-10 19:07:36.743355] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.238 pt4 00:23:22.238 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:22.238 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.238 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.238 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.238 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.238 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.239 19:07:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.239 "name": "raid_bdev1", 00:23:22.239 "uuid": "a5909719-add5-4fb0-9ceb-afb856053f44", 00:23:22.239 "strip_size_kb": 0, 00:23:22.239 "state": "online", 00:23:22.239 "raid_level": "raid1", 00:23:22.239 "superblock": true, 00:23:22.239 "num_base_bdevs": 4, 00:23:22.239 "num_base_bdevs_discovered": 3, 00:23:22.239 "num_base_bdevs_operational": 3, 00:23:22.239 "base_bdevs_list": [ 00:23:22.239 { 00:23:22.239 "name": null, 00:23:22.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.239 "is_configured": false, 00:23:22.239 "data_offset": 2048, 00:23:22.239 "data_size": 63488 00:23:22.239 }, 00:23:22.239 { 00:23:22.239 "name": "pt2", 00:23:22.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:22.239 "is_configured": true, 00:23:22.239 "data_offset": 2048, 00:23:22.239 "data_size": 63488 00:23:22.239 }, 00:23:22.239 { 00:23:22.239 "name": "pt3", 00:23:22.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:22.239 "is_configured": true, 00:23:22.239 "data_offset": 2048, 00:23:22.239 "data_size": 63488 00:23:22.239 }, 00:23:22.239 { 00:23:22.239 "name": "pt4", 00:23:22.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:22.239 "is_configured": true, 00:23:22.239 "data_offset": 2048, 00:23:22.239 "data_size": 63488 00:23:22.239 } 00:23:22.239 ] 00:23:22.239 }' 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.239 19:07:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.807 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:22.808 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:23.067 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:23.067 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:23.067 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:23.326 [2024-06-10 19:07:37.937760] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' a5909719-add5-4fb0-9ceb-afb856053f44 '!=' a5909719-add5-4fb0-9ceb-afb856053f44 ']' 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1737818 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 1737818 ']' 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 1737818 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:23.326 19:07:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1737818 00:23:23.326 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:23.326 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:23.326 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1737818' 00:23:23.326 killing process with 
pid 1737818 00:23:23.326 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 1737818 00:23:23.326 [2024-06-10 19:07:38.015823] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.326 [2024-06-10 19:07:38.015870] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.326 [2024-06-10 19:07:38.015931] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.326 [2024-06-10 19:07:38.015942] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa22440 name raid_bdev1, state offline 00:23:23.326 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 1737818 00:23:23.326 [2024-06-10 19:07:38.046909] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.585 19:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:23.585 00:23:23.585 real 0m23.736s 00:23:23.585 user 0m43.284s 00:23:23.585 sys 0m4.394s 00:23:23.585 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:23.585 19:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.585 ************************************ 00:23:23.585 END TEST raid_superblock_test 00:23:23.585 ************************************ 00:23:23.585 19:07:38 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:23:23.585 19:07:38 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:23:23.585 19:07:38 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:23.585 19:07:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.585 ************************************ 00:23:23.585 START TEST raid_read_error_test 00:23:23.585 ************************************ 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # 
raid_io_error_test raid1 4 read 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 
00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:23.585 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6ebvSu3DcK 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1742347 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1742347 /var/tmp/spdk-raid.sock 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 1742347 ']' 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:23:23.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:23.845 19:07:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.845 [2024-06-10 19:07:38.398477] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:23:23.845 [2024-06-10 19:07:38.398533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742347 ] 00:23:23.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.845 EAL: Requested device 0000:b6:01.0 cannot be used 00:23:23.845 [the same qat_pci_device_allocate()/EAL message pair repeats for devices 0000:b6:01.1 through 0000:b8:02.7] 00:23:23.846 [2024-06-10 19:07:38.531876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.105 [2024-06-10 19:07:38.614822] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.105 [2024-06-10 19:07:38.675182] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.105 [2024-06-10 19:07:38.675219] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.674 19:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:24.674 19:07:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:23:24.674 19:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in 
"${base_bdevs[@]}" 00:23:24.674 19:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:24.934 BaseBdev1_malloc 00:23:24.934 19:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:25.194 true 00:23:25.194 19:07:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:25.453 [2024-06-10 19:07:39.981544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:25.453 [2024-06-10 19:07:39.981589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.453 [2024-06-10 19:07:39.981607] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2624d50 00:23:25.453 [2024-06-10 19:07:39.981620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.453 [2024-06-10 19:07:39.983144] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.453 [2024-06-10 19:07:39.983171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:25.453 BaseBdev1 00:23:25.453 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:25.453 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:25.712 BaseBdev2_malloc 00:23:25.712 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:26.007 true 00:23:26.007 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:26.007 [2024-06-10 19:07:40.699693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:26.007 [2024-06-10 19:07:40.699733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.007 [2024-06-10 19:07:40.699750] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x262a2e0 00:23:26.007 [2024-06-10 19:07:40.699762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.007 [2024-06-10 19:07:40.701164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.007 [2024-06-10 19:07:40.701191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:26.007 BaseBdev2 00:23:26.007 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:26.007 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:26.293 BaseBdev3_malloc 00:23:26.293 19:07:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:26.553 true 00:23:26.553 19:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:26.813 [2024-06-10 19:07:41.369809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:23:26.813 [2024-06-10 19:07:41.369850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.813 [2024-06-10 19:07:41.369868] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x262bfd0 00:23:26.813 [2024-06-10 19:07:41.369880] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.813 [2024-06-10 19:07:41.371271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.813 [2024-06-10 19:07:41.371297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:26.813 BaseBdev3 00:23:26.813 19:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:26.813 19:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:27.072 BaseBdev4_malloc 00:23:27.072 19:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:27.072 true 00:23:27.072 19:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:27.332 [2024-06-10 19:07:42.019741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:27.332 [2024-06-10 19:07:42.019778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.332 [2024-06-10 19:07:42.019797] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x262d830 00:23:27.332 [2024-06-10 19:07:42.019809] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.332 [2024-06-10 
19:07:42.021180] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.332 [2024-06-10 19:07:42.021205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:27.332 BaseBdev4 00:23:27.332 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:27.591 [2024-06-10 19:07:42.244356] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.591 [2024-06-10 19:07:42.245523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.591 [2024-06-10 19:07:42.245597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:27.591 [2024-06-10 19:07:42.245654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:27.591 [2024-06-10 19:07:42.245869] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x262e3a0 00:23:27.591 [2024-06-10 19:07:42.245880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:27.591 [2024-06-10 19:07:42.246056] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x262fac0 00:23:27.591 [2024-06-10 19:07:42.246197] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x262e3a0 00:23:27.591 [2024-06-10 19:07:42.246207] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x262e3a0 00:23:27.591 [2024-06-10 19:07:42.246299] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.591 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.850 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.850 "name": "raid_bdev1", 00:23:27.850 "uuid": "0f089119-affa-43c2-ab7d-55d9cbac2e2b", 00:23:27.850 "strip_size_kb": 0, 00:23:27.850 "state": "online", 00:23:27.850 "raid_level": "raid1", 00:23:27.850 "superblock": true, 00:23:27.850 "num_base_bdevs": 4, 00:23:27.850 "num_base_bdevs_discovered": 4, 00:23:27.850 "num_base_bdevs_operational": 4, 00:23:27.850 "base_bdevs_list": [ 00:23:27.850 { 00:23:27.850 "name": "BaseBdev1", 00:23:27.850 "uuid": "1d91cf58-c139-5430-a218-17bed4f14e61", 00:23:27.850 "is_configured": true, 00:23:27.850 "data_offset": 2048, 00:23:27.850 "data_size": 63488 00:23:27.850 }, 00:23:27.850 { 00:23:27.850 "name": "BaseBdev2", 00:23:27.850 "uuid": 
"e2414896-7624-5d96-8220-64e53a2dd01c", 00:23:27.850 "is_configured": true, 00:23:27.850 "data_offset": 2048, 00:23:27.850 "data_size": 63488 00:23:27.850 }, 00:23:27.850 { 00:23:27.850 "name": "BaseBdev3", 00:23:27.850 "uuid": "26aa34be-c88b-5445-9046-ea4e8067ca9c", 00:23:27.850 "is_configured": true, 00:23:27.850 "data_offset": 2048, 00:23:27.850 "data_size": 63488 00:23:27.850 }, 00:23:27.850 { 00:23:27.850 "name": "BaseBdev4", 00:23:27.850 "uuid": "e690ed03-9ba3-5ad5-90ec-1fa5addfcbe2", 00:23:27.850 "is_configured": true, 00:23:27.850 "data_offset": 2048, 00:23:27.850 "data_size": 63488 00:23:27.850 } 00:23:27.850 ] 00:23:27.850 }' 00:23:27.850 19:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.850 19:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.419 19:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:28.419 19:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:28.419 [2024-06-10 19:07:43.158967] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x262fa00 00:23:29.359 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.619 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.878 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.878 "name": "raid_bdev1", 00:23:29.878 "uuid": "0f089119-affa-43c2-ab7d-55d9cbac2e2b", 00:23:29.878 "strip_size_kb": 0, 00:23:29.878 "state": "online", 00:23:29.878 "raid_level": "raid1", 00:23:29.878 "superblock": true, 00:23:29.878 "num_base_bdevs": 4, 00:23:29.878 "num_base_bdevs_discovered": 4, 00:23:29.878 "num_base_bdevs_operational": 4, 00:23:29.878 "base_bdevs_list": [ 00:23:29.878 { 00:23:29.878 "name": "BaseBdev1", 00:23:29.878 "uuid": "1d91cf58-c139-5430-a218-17bed4f14e61", 00:23:29.878 "is_configured": 
true, 00:23:29.878 "data_offset": 2048, 00:23:29.878 "data_size": 63488 00:23:29.878 }, 00:23:29.878 { 00:23:29.878 "name": "BaseBdev2", 00:23:29.878 "uuid": "e2414896-7624-5d96-8220-64e53a2dd01c", 00:23:29.878 "is_configured": true, 00:23:29.878 "data_offset": 2048, 00:23:29.878 "data_size": 63488 00:23:29.878 }, 00:23:29.878 { 00:23:29.878 "name": "BaseBdev3", 00:23:29.878 "uuid": "26aa34be-c88b-5445-9046-ea4e8067ca9c", 00:23:29.878 "is_configured": true, 00:23:29.878 "data_offset": 2048, 00:23:29.878 "data_size": 63488 00:23:29.878 }, 00:23:29.878 { 00:23:29.878 "name": "BaseBdev4", 00:23:29.879 "uuid": "e690ed03-9ba3-5ad5-90ec-1fa5addfcbe2", 00:23:29.879 "is_configured": true, 00:23:29.879 "data_offset": 2048, 00:23:29.879 "data_size": 63488 00:23:29.879 } 00:23:29.879 ] 00:23:29.879 }' 00:23:29.879 19:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.879 19:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.446 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.705 [2024-06-10 19:07:45.291352] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.705 [2024-06-10 19:07:45.291379] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.705 [2024-06-10 19:07:45.294296] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.705 [2024-06-10 19:07:45.294331] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.705 [2024-06-10 19:07:45.294439] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.705 [2024-06-10 19:07:45.294450] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x262e3a0 name raid_bdev1, state offline 00:23:30.705 0 
00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1742347 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 1742347 ']' 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 1742347 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1742347 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1742347' 00:23:30.705 killing process with pid 1742347 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 1742347 00:23:30.705 [2024-06-10 19:07:45.363294] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:30.705 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 1742347 00:23:30.705 [2024-06-10 19:07:45.390463] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6ebvSu3DcK 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy 
raid1 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:30.965 00:23:30.965 real 0m7.274s 00:23:30.965 user 0m11.599s 00:23:30.965 sys 0m1.271s 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:30.965 19:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.965 ************************************ 00:23:30.965 END TEST raid_read_error_test 00:23:30.965 ************************************ 00:23:30.965 19:07:45 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:23:30.965 19:07:45 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:23:30.965 19:07:45 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:30.965 19:07:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:30.965 ************************************ 00:23:30.965 START TEST raid_write_error_test 00:23:30.965 ************************************ 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 4 write 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
echo BaseBdev1 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' 
raid1 ']' 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Z1h8GbEAln 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1743727 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1743727 /var/tmp/spdk-raid.sock 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 1743727 ']' 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:30.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:30.965 19:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.225 [2024-06-10 19:07:45.752558] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:23:31.225 [2024-06-10 19:07:45.752634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743727 ] 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.0 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.1 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.2 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.3 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.4 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.5 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.6 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:01.7 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.0 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.1 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.2 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.3 cannot be used 00:23:31.225 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.4 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.5 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.6 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b6:02.7 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.0 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.1 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.2 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.3 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.4 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.5 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.6 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:01.7 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.0 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.1 cannot be used 00:23:31.225 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.2 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.3 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.4 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.5 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.6 cannot be used 00:23:31.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:31.225 EAL: Requested device 0000:b8:02.7 cannot be used 00:23:31.225 [2024-06-10 19:07:45.887045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.225 [2024-06-10 19:07:45.973042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.485 [2024-06-10 19:07:46.041059] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:31.485 [2024-06-10 19:07:46.041098] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:32.053 19:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:32.053 19:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:23:32.053 19:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:32.053 19:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:32.312 BaseBdev1_malloc 00:23:32.312 19:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:32.312 true 00:23:32.571 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:32.571 [2024-06-10 19:07:47.286131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:32.571 [2024-06-10 19:07:47.286172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.571 [2024-06-10 19:07:47.286190] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2351d50 00:23:32.571 [2024-06-10 19:07:47.286202] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.571 [2024-06-10 19:07:47.287785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.571 [2024-06-10 19:07:47.287811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:32.572 BaseBdev1 00:23:32.572 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:32.572 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:32.831 BaseBdev2_malloc 00:23:32.831 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:33.090 true 00:23:33.090 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:33.348 [2024-06-10 19:07:47.956095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:23:33.348 [2024-06-10 19:07:47.956133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.348 [2024-06-10 19:07:47.956149] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23572e0 00:23:33.348 [2024-06-10 19:07:47.956161] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.348 [2024-06-10 19:07:47.957538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.348 [2024-06-10 19:07:47.957564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:33.348 BaseBdev2 00:23:33.348 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:33.348 19:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:33.607 BaseBdev3_malloc 00:23:33.607 19:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:33.866 true 00:23:33.866 19:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:33.866 [2024-06-10 19:07:48.614091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:33.866 [2024-06-10 19:07:48.614128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.866 [2024-06-10 19:07:48.614146] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2358fd0 00:23:33.866 [2024-06-10 19:07:48.614158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.866 [2024-06-10 
19:07:48.615479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.867 [2024-06-10 19:07:48.615506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:33.867 BaseBdev3 00:23:34.126 19:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:34.126 19:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:34.126 BaseBdev4_malloc 00:23:34.126 19:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:34.385 true 00:23:34.385 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:34.644 [2024-06-10 19:07:49.268138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:34.644 [2024-06-10 19:07:49.268176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.644 [2024-06-10 19:07:49.268194] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x235a830 00:23:34.644 [2024-06-10 19:07:49.268206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.644 [2024-06-10 19:07:49.269572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.644 [2024-06-10 19:07:49.269604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:34.644 BaseBdev4 00:23:34.644 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:34.902 [2024-06-10 19:07:49.492750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:34.902 [2024-06-10 19:07:49.493915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.902 [2024-06-10 19:07:49.493977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.902 [2024-06-10 19:07:49.494034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:34.902 [2024-06-10 19:07:49.494248] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x235b3a0 00:23:34.902 [2024-06-10 19:07:49.494258] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:34.902 [2024-06-10 19:07:49.494434] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x235cac0 00:23:34.902 [2024-06-10 19:07:49.494584] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x235b3a0 00:23:34.902 [2024-06-10 19:07:49.494594] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x235b3a0 00:23:34.902 [2024-06-10 19:07:49.494687] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.902 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.161 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.161 "name": "raid_bdev1", 00:23:35.161 "uuid": "9bdba65d-90b9-49d7-bd31-b1d7a1ffecda", 00:23:35.161 "strip_size_kb": 0, 00:23:35.161 "state": "online", 00:23:35.161 "raid_level": "raid1", 00:23:35.161 "superblock": true, 00:23:35.161 "num_base_bdevs": 4, 00:23:35.161 "num_base_bdevs_discovered": 4, 00:23:35.161 "num_base_bdevs_operational": 4, 00:23:35.161 "base_bdevs_list": [ 00:23:35.161 { 00:23:35.161 "name": "BaseBdev1", 00:23:35.161 "uuid": "fce90cf6-9f41-54e7-a323-6a2ec8ab8118", 00:23:35.161 "is_configured": true, 00:23:35.161 "data_offset": 2048, 00:23:35.161 "data_size": 63488 00:23:35.161 }, 00:23:35.161 { 00:23:35.161 "name": "BaseBdev2", 00:23:35.161 "uuid": "f4e008c1-85a3-511c-9bd5-095b3a32947a", 00:23:35.161 "is_configured": true, 00:23:35.161 "data_offset": 2048, 00:23:35.161 "data_size": 63488 00:23:35.161 }, 00:23:35.161 { 00:23:35.161 "name": "BaseBdev3", 00:23:35.161 "uuid": "46e506c6-45d2-5532-8088-a362652a40bf", 00:23:35.161 "is_configured": true, 00:23:35.161 "data_offset": 2048, 00:23:35.161 "data_size": 63488 00:23:35.161 }, 00:23:35.161 { 
00:23:35.161 "name": "BaseBdev4", 00:23:35.161 "uuid": "acf54a21-b12a-5bb8-9b38-0939252a497f", 00:23:35.161 "is_configured": true, 00:23:35.161 "data_offset": 2048, 00:23:35.161 "data_size": 63488 00:23:35.161 } 00:23:35.161 ] 00:23:35.161 }' 00:23:35.161 19:07:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.161 19:07:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.729 19:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:35.729 19:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:35.729 [2024-06-10 19:07:50.411382] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x235ca00 00:23:36.668 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:36.927 [2024-06-10 19:07:51.526048] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:36.927 [2024-06-10 19:07:51.526097] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:36.927 [2024-06-10 19:07:51.526297] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x235ca00 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.927 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.186 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.186 "name": "raid_bdev1", 00:23:37.186 "uuid": "9bdba65d-90b9-49d7-bd31-b1d7a1ffecda", 00:23:37.186 "strip_size_kb": 0, 00:23:37.186 "state": "online", 00:23:37.186 "raid_level": "raid1", 00:23:37.186 "superblock": true, 00:23:37.186 "num_base_bdevs": 4, 00:23:37.186 "num_base_bdevs_discovered": 3, 00:23:37.186 "num_base_bdevs_operational": 3, 00:23:37.186 "base_bdevs_list": [ 00:23:37.186 { 00:23:37.186 "name": null, 00:23:37.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.186 
"is_configured": false, 00:23:37.186 "data_offset": 2048, 00:23:37.186 "data_size": 63488 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "name": "BaseBdev2", 00:23:37.186 "uuid": "f4e008c1-85a3-511c-9bd5-095b3a32947a", 00:23:37.186 "is_configured": true, 00:23:37.186 "data_offset": 2048, 00:23:37.186 "data_size": 63488 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "name": "BaseBdev3", 00:23:37.186 "uuid": "46e506c6-45d2-5532-8088-a362652a40bf", 00:23:37.186 "is_configured": true, 00:23:37.186 "data_offset": 2048, 00:23:37.186 "data_size": 63488 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "name": "BaseBdev4", 00:23:37.186 "uuid": "acf54a21-b12a-5bb8-9b38-0939252a497f", 00:23:37.186 "is_configured": true, 00:23:37.186 "data_offset": 2048, 00:23:37.186 "data_size": 63488 00:23:37.186 } 00:23:37.186 ] 00:23:37.186 }' 00:23:37.186 19:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.186 19:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.753 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:38.012 [2024-06-10 19:07:52.575540] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:38.012 [2024-06-10 19:07:52.575587] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:38.012 [2024-06-10 19:07:52.578457] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:38.012 [2024-06-10 19:07:52.578489] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.012 [2024-06-10 19:07:52.578582] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:38.012 [2024-06-10 19:07:52.578593] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x235b3a0 name raid_bdev1, state offline 
00:23:38.012 0 00:23:38.012 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1743727 00:23:38.012 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 1743727 ']' 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 1743727 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1743727 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1743727' 00:23:38.013 killing process with pid 1743727 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 1743727 00:23:38.013 [2024-06-10 19:07:52.657558] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:38.013 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 1743727 00:23:38.013 [2024-06-10 19:07:52.684627] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Z1h8GbEAln 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:38.272 00:23:38.272 real 0m7.219s 00:23:38.272 user 0m11.462s 00:23:38.272 sys 0m1.282s 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:38.272 19:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.272 ************************************ 00:23:38.272 END TEST raid_write_error_test 00:23:38.272 ************************************ 00:23:38.272 19:07:52 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:23:38.272 19:07:52 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:23:38.272 19:07:52 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:23:38.272 19:07:52 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:23:38.272 19:07:52 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:38.272 19:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:38.272 ************************************ 00:23:38.272 START TEST raid_rebuild_test 00:23:38.272 ************************************ 00:23:38.272 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 false false true 00:23:38.272 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:38.272 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:38.272 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local 
background_io=false 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # 
raid_pid=1744960 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 1744960 /var/tmp/spdk-raid.sock 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@830 -- # '[' -z 1744960 ']' 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:38.273 19:07:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.532 [2024-06-10 19:07:53.049650] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:23:38.533 [2024-06-10 19:07:53.049709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744960 ] 00:23:38.533 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:38.533 Zero copy mechanism will not be used. 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:38.533 EAL: Requested device 0000:b8:02.4 cannot be used 00:23:38.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:38.533 EAL: Requested device 0000:b8:02.5 cannot be used 00:23:38.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:38.533 EAL: Requested device 0000:b8:02.6 cannot be used 00:23:38.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:38.533 EAL: Requested device 0000:b8:02.7 cannot be used 00:23:38.533 [2024-06-10 19:07:53.182542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.533 [2024-06-10 19:07:53.265362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.792 [2024-06-10 19:07:53.323521] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.793 [2024-06-10 19:07:53.323558] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.362 19:07:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:39.362 19:07:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@863 -- # return 0 00:23:39.362 19:07:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:39.362 19:07:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:39.621 BaseBdev1_malloc 00:23:39.621 19:07:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:39.881 [2024-06-10 19:07:54.388159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:39.881 [2024-06-10 19:07:54.388200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:23:39.881 [2024-06-10 19:07:54.388220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14bf200 00:23:39.881 [2024-06-10 19:07:54.388231] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.881 [2024-06-10 19:07:54.389727] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.881 [2024-06-10 19:07:54.389753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.881 BaseBdev1 00:23:39.881 19:07:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:39.881 19:07:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:39.881 BaseBdev2_malloc 00:23:39.881 19:07:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:40.140 [2024-06-10 19:07:54.845885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:40.140 [2024-06-10 19:07:54.845920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.140 [2024-06-10 19:07:54.845938] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1656d90 00:23:40.140 [2024-06-10 19:07:54.845949] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.140 [2024-06-10 19:07:54.847273] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.140 [2024-06-10 19:07:54.847300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:40.140 BaseBdev2 00:23:40.140 19:07:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:40.400 spare_malloc 00:23:40.400 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:40.659 spare_delay 00:23:40.659 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:40.918 [2024-06-10 19:07:55.540013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:40.918 [2024-06-10 19:07:55.540052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.918 [2024-06-10 19:07:55.540071] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b7b80 00:23:40.918 [2024-06-10 19:07:55.540083] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.918 [2024-06-10 19:07:55.541468] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.918 [2024-06-10 19:07:55.541493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:40.918 spare 00:23:40.918 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:41.178 [2024-06-10 19:07:55.764616] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:41.178 [2024-06-10 19:07:55.765762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.178 [2024-06-10 19:07:55.765828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b8d90 
00:23:41.178 [2024-06-10 19:07:55.765839] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:41.178 [2024-06-10 19:07:55.766021] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1659ee0 00:23:41.178 [2024-06-10 19:07:55.766149] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b8d90 00:23:41.178 [2024-06-10 19:07:55.766159] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14b8d90 00:23:41.178 [2024-06-10 19:07:55.766259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.178 19:07:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:41.437 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.437 "name": "raid_bdev1", 00:23:41.437 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:41.437 "strip_size_kb": 0, 00:23:41.437 "state": "online", 00:23:41.437 "raid_level": "raid1", 00:23:41.437 "superblock": false, 00:23:41.437 "num_base_bdevs": 2, 00:23:41.437 "num_base_bdevs_discovered": 2, 00:23:41.437 "num_base_bdevs_operational": 2, 00:23:41.437 "base_bdevs_list": [ 00:23:41.437 { 00:23:41.437 "name": "BaseBdev1", 00:23:41.437 "uuid": "0a363a3a-1948-5c3c-be01-df5f46c4d704", 00:23:41.437 "is_configured": true, 00:23:41.437 "data_offset": 0, 00:23:41.437 "data_size": 65536 00:23:41.437 }, 00:23:41.437 { 00:23:41.437 "name": "BaseBdev2", 00:23:41.437 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:41.437 "is_configured": true, 00:23:41.437 "data_offset": 0, 00:23:41.437 "data_size": 65536 00:23:41.437 } 00:23:41.437 ] 00:23:41.437 }' 00:23:41.437 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.437 19:07:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.006 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:42.006 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:42.266 [2024-06-10 19:07:56.767418] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:42.266 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:23:42.266 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.266 19:07:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:42.266 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:42.525 [2024-06-10 19:07:57.224479] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1659ee0 00:23:42.525 /dev/nbd0 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # 
local nbd_name=nbd0 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:23:42.525 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:42.526 1+0 records in 00:23:42.526 1+0 records out 00:23:42.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276556 s, 14.8 MB/s 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:42.526 
19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:42.526 19:07:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:47.802 65536+0 records in 00:23:47.802 65536+0 records out 00:23:47.802 33554432 bytes (34 MB, 32 MiB) copied, 4.95292 s, 6.8 MB/s 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:47.802 [2024-06-10 19:08:02.483063] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.802 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:48.062 [2024-06-10 19:08:02.703344] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.062 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.321 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.321 "name": "raid_bdev1", 00:23:48.321 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:48.321 
"strip_size_kb": 0, 00:23:48.321 "state": "online", 00:23:48.321 "raid_level": "raid1", 00:23:48.321 "superblock": false, 00:23:48.321 "num_base_bdevs": 2, 00:23:48.321 "num_base_bdevs_discovered": 1, 00:23:48.321 "num_base_bdevs_operational": 1, 00:23:48.321 "base_bdevs_list": [ 00:23:48.321 { 00:23:48.321 "name": null, 00:23:48.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.321 "is_configured": false, 00:23:48.321 "data_offset": 0, 00:23:48.321 "data_size": 65536 00:23:48.321 }, 00:23:48.321 { 00:23:48.321 "name": "BaseBdev2", 00:23:48.321 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:48.321 "is_configured": true, 00:23:48.321 "data_offset": 0, 00:23:48.321 "data_size": 65536 00:23:48.321 } 00:23:48.321 ] 00:23:48.321 }' 00:23:48.321 19:08:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.321 19:08:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.889 19:08:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:49.149 [2024-06-10 19:08:03.750109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:49.149 [2024-06-10 19:08:03.754891] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14baaf0 00:23:49.149 [2024-06-10 19:08:03.756940] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:49.149 19:08:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:50.086 19:08:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.086 19:08:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:50.086 19:08:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:50.086 19:08:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:50.086 19:08:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:50.086 19:08:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.086 19:08:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.345 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:50.346 "name": "raid_bdev1", 00:23:50.346 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:50.346 "strip_size_kb": 0, 00:23:50.346 "state": "online", 00:23:50.346 "raid_level": "raid1", 00:23:50.346 "superblock": false, 00:23:50.346 "num_base_bdevs": 2, 00:23:50.346 "num_base_bdevs_discovered": 2, 00:23:50.346 "num_base_bdevs_operational": 2, 00:23:50.346 "process": { 00:23:50.346 "type": "rebuild", 00:23:50.346 "target": "spare", 00:23:50.346 "progress": { 00:23:50.346 "blocks": 24576, 00:23:50.346 "percent": 37 00:23:50.346 } 00:23:50.346 }, 00:23:50.346 "base_bdevs_list": [ 00:23:50.346 { 00:23:50.346 "name": "spare", 00:23:50.346 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:50.346 "is_configured": true, 00:23:50.346 "data_offset": 0, 00:23:50.346 "data_size": 65536 00:23:50.346 }, 00:23:50.346 { 00:23:50.346 "name": "BaseBdev2", 00:23:50.346 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:50.346 "is_configured": true, 00:23:50.346 "data_offset": 0, 00:23:50.346 "data_size": 65536 00:23:50.346 } 00:23:50.346 ] 00:23:50.346 }' 00:23:50.346 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:50.346 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.346 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:23:50.605 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.605 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:50.605 [2024-06-10 19:08:05.307138] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.924 [2024-06-10 19:08:05.368556] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:50.924 [2024-06-10 19:08:05.368603] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.924 [2024-06-10 19:08:05.368617] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.924 [2024-06-10 19:08:05.368625] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:50.924 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:50.925 "name": "raid_bdev1", 00:23:50.925 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:50.925 "strip_size_kb": 0, 00:23:50.925 "state": "online", 00:23:50.925 "raid_level": "raid1", 00:23:50.925 "superblock": false, 00:23:50.925 "num_base_bdevs": 2, 00:23:50.925 "num_base_bdevs_discovered": 1, 00:23:50.925 "num_base_bdevs_operational": 1, 00:23:50.925 "base_bdevs_list": [ 00:23:50.925 { 00:23:50.925 "name": null, 00:23:50.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.925 "is_configured": false, 00:23:50.925 "data_offset": 0, 00:23:50.925 "data_size": 65536 00:23:50.925 }, 00:23:50.925 { 00:23:50.925 "name": "BaseBdev2", 00:23:50.925 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:50.925 "is_configured": true, 00:23:50.925 "data_offset": 0, 00:23:50.925 "data_size": 65536 00:23:50.925 } 00:23:50.925 ] 00:23:50.925 }' 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:50.925 19:08:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.493 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.752 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.752 "name": "raid_bdev1", 00:23:51.752 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:51.752 "strip_size_kb": 0, 00:23:51.752 "state": "online", 00:23:51.752 "raid_level": "raid1", 00:23:51.752 "superblock": false, 00:23:51.752 "num_base_bdevs": 2, 00:23:51.752 "num_base_bdevs_discovered": 1, 00:23:51.752 "num_base_bdevs_operational": 1, 00:23:51.752 "base_bdevs_list": [ 00:23:51.752 { 00:23:51.752 "name": null, 00:23:51.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.752 "is_configured": false, 00:23:51.752 "data_offset": 0, 00:23:51.752 "data_size": 65536 00:23:51.752 }, 00:23:51.752 { 00:23:51.752 "name": "BaseBdev2", 00:23:51.752 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:51.752 "is_configured": true, 00:23:51.752 "data_offset": 0, 00:23:51.752 "data_size": 65536 00:23:51.752 } 00:23:51.752 ] 00:23:51.752 }' 00:23:51.752 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:51.752 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:51.752 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:52.012 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:52.012 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:52.012 [2024-06-10 
19:08:06.720335] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:52.012 [2024-06-10 19:08:06.725024] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14baaf0 00:23:52.012 [2024-06-10 19:08:06.726382] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:52.012 19:08:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.391 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:53.392 "name": "raid_bdev1", 00:23:53.392 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:53.392 "strip_size_kb": 0, 00:23:53.392 "state": "online", 00:23:53.392 "raid_level": "raid1", 00:23:53.392 "superblock": false, 00:23:53.392 "num_base_bdevs": 2, 00:23:53.392 "num_base_bdevs_discovered": 2, 00:23:53.392 "num_base_bdevs_operational": 2, 00:23:53.392 "process": { 00:23:53.392 "type": "rebuild", 00:23:53.392 "target": "spare", 00:23:53.392 "progress": { 00:23:53.392 "blocks": 24576, 00:23:53.392 "percent": 37 00:23:53.392 } 00:23:53.392 }, 00:23:53.392 
"base_bdevs_list": [ 00:23:53.392 { 00:23:53.392 "name": "spare", 00:23:53.392 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:53.392 "is_configured": true, 00:23:53.392 "data_offset": 0, 00:23:53.392 "data_size": 65536 00:23:53.392 }, 00:23:53.392 { 00:23:53.392 "name": "BaseBdev2", 00:23:53.392 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:53.392 "is_configured": true, 00:23:53.392 "data_offset": 0, 00:23:53.392 "data_size": 65536 00:23:53.392 } 00:23:53.392 ] 00:23:53.392 }' 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.392 19:08:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=713 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.392 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.651 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:53.651 "name": "raid_bdev1", 00:23:53.651 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:53.651 "strip_size_kb": 0, 00:23:53.651 "state": "online", 00:23:53.651 "raid_level": "raid1", 00:23:53.651 "superblock": false, 00:23:53.651 "num_base_bdevs": 2, 00:23:53.651 "num_base_bdevs_discovered": 2, 00:23:53.651 "num_base_bdevs_operational": 2, 00:23:53.651 "process": { 00:23:53.651 "type": "rebuild", 00:23:53.651 "target": "spare", 00:23:53.651 "progress": { 00:23:53.651 "blocks": 28672, 00:23:53.651 "percent": 43 00:23:53.651 } 00:23:53.651 }, 00:23:53.651 "base_bdevs_list": [ 00:23:53.651 { 00:23:53.651 "name": "spare", 00:23:53.651 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:53.651 "is_configured": true, 00:23:53.651 "data_offset": 0, 00:23:53.651 "data_size": 65536 00:23:53.651 }, 00:23:53.651 { 00:23:53.651 "name": "BaseBdev2", 00:23:53.651 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:53.651 "is_configured": true, 00:23:53.651 "data_offset": 0, 00:23:53.651 "data_size": 65536 00:23:53.651 } 00:23:53.651 ] 00:23:53.651 }' 00:23:53.651 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:53.651 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.651 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:53.651 19:08:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.651 19:08:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.588 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.847 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.847 "name": "raid_bdev1", 00:23:54.847 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:54.847 "strip_size_kb": 0, 00:23:54.847 "state": "online", 00:23:54.847 "raid_level": "raid1", 00:23:54.847 "superblock": false, 00:23:54.847 "num_base_bdevs": 2, 00:23:54.847 "num_base_bdevs_discovered": 2, 00:23:54.847 "num_base_bdevs_operational": 2, 00:23:54.847 "process": { 00:23:54.847 "type": "rebuild", 00:23:54.847 "target": "spare", 00:23:54.847 "progress": { 00:23:54.847 "blocks": 55296, 00:23:54.847 "percent": 84 00:23:54.847 } 00:23:54.847 }, 00:23:54.847 "base_bdevs_list": [ 00:23:54.847 { 00:23:54.847 "name": "spare", 00:23:54.847 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:54.847 "is_configured": true, 00:23:54.847 "data_offset": 0, 00:23:54.847 
"data_size": 65536 00:23:54.847 }, 00:23:54.847 { 00:23:54.847 "name": "BaseBdev2", 00:23:54.847 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:54.847 "is_configured": true, 00:23:54.847 "data_offset": 0, 00:23:54.847 "data_size": 65536 00:23:54.847 } 00:23:54.847 ] 00:23:54.847 }' 00:23:54.847 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:54.847 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.847 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:55.107 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.107 19:08:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:55.365 [2024-06-10 19:08:09.949515] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:55.365 [2024-06-10 19:08:09.949565] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:55.365 [2024-06-10 19:08:09.949604] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:55.933 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.933 
19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.192 "name": "raid_bdev1", 00:23:56.192 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:56.192 "strip_size_kb": 0, 00:23:56.192 "state": "online", 00:23:56.192 "raid_level": "raid1", 00:23:56.192 "superblock": false, 00:23:56.192 "num_base_bdevs": 2, 00:23:56.192 "num_base_bdevs_discovered": 2, 00:23:56.192 "num_base_bdevs_operational": 2, 00:23:56.192 "base_bdevs_list": [ 00:23:56.192 { 00:23:56.192 "name": "spare", 00:23:56.192 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:56.192 "is_configured": true, 00:23:56.192 "data_offset": 0, 00:23:56.192 "data_size": 65536 00:23:56.192 }, 00:23:56.192 { 00:23:56.192 "name": "BaseBdev2", 00:23:56.192 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:56.192 "is_configured": true, 00:23:56.192 "data_offset": 0, 00:23:56.192 "data_size": 65536 00:23:56.192 } 00:23:56.192 ] 00:23:56.192 }' 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:56.192 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:56.193 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.193 19:08:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.451 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.451 "name": "raid_bdev1", 00:23:56.451 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:56.451 "strip_size_kb": 0, 00:23:56.451 "state": "online", 00:23:56.451 "raid_level": "raid1", 00:23:56.451 "superblock": false, 00:23:56.451 "num_base_bdevs": 2, 00:23:56.451 "num_base_bdevs_discovered": 2, 00:23:56.451 "num_base_bdevs_operational": 2, 00:23:56.451 "base_bdevs_list": [ 00:23:56.451 { 00:23:56.451 "name": "spare", 00:23:56.451 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:56.451 "is_configured": true, 00:23:56.451 "data_offset": 0, 00:23:56.451 "data_size": 65536 00:23:56.451 }, 00:23:56.451 { 00:23:56.451 "name": "BaseBdev2", 00:23:56.451 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:56.451 "is_configured": true, 00:23:56.451 "data_offset": 0, 00:23:56.451 "data_size": 65536 00:23:56.451 } 00:23:56.451 ] 00:23:56.451 }' 00:23:56.451 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:56.451 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:56.451 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.710 "name": "raid_bdev1", 00:23:56.710 "uuid": "2bc43665-ce50-40c3-ab13-0d0d54512bd0", 00:23:56.710 "strip_size_kb": 0, 00:23:56.710 "state": "online", 00:23:56.710 "raid_level": "raid1", 00:23:56.710 "superblock": false, 00:23:56.710 "num_base_bdevs": 2, 00:23:56.710 "num_base_bdevs_discovered": 2, 00:23:56.710 "num_base_bdevs_operational": 2, 00:23:56.710 "base_bdevs_list": [ 00:23:56.710 { 00:23:56.710 "name": "spare", 00:23:56.710 "uuid": "743bfe6a-0282-5720-bd3f-19689d354162", 00:23:56.710 "is_configured": true, 00:23:56.710 "data_offset": 0, 
00:23:56.710 "data_size": 65536 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "name": "BaseBdev2", 00:23:56.710 "uuid": "843ad6e2-d2bf-552f-86dd-097d9cff7317", 00:23:56.710 "is_configured": true, 00:23:56.710 "data_offset": 0, 00:23:56.710 "data_size": 65536 00:23:56.710 } 00:23:56.710 ] 00:23:56.710 }' 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.710 19:08:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.278 19:08:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:57.555 [2024-06-10 19:08:12.223460] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:57.555 [2024-06-10 19:08:12.223483] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:57.555 [2024-06-10 19:08:12.223536] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:57.555 [2024-06-10 19:08:12.223592] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:57.555 [2024-06-10 19:08:12.223604] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b8d90 name raid_bdev1, state offline 00:23:57.555 19:08:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.555 19:08:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:23:57.815 19:08:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:57.815 19:08:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:57.815 19:08:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:57.816 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:58.075 /dev/nbd0 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:23:58.075 19:08:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:58.075 1+0 records in 00:23:58.075 1+0 records out 00:23:58.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241812 s, 16.9 MB/s 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:58.075 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:58.335 /dev/nbd1 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:58.335 19:08:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:58.335 1+0 records in 00:23:58.335 1+0 records out 00:23:58.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292097 s, 14.0 MB/s 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:58.335 19:08:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:58.335 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:58.594 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:58.852 19:08:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 1744960 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@949 -- # '[' -z 1744960 ']' 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # kill -0 1744960 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # uname 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:58.852 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1744960 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1744960' 00:23:59.112 killing process with pid 1744960 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # kill 1744960 00:23:59.112 Received shutdown signal, test time was about 60.000000 seconds 00:23:59.112 00:23:59.112 Latency(us) 00:23:59.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.112 
=================================================================================================================== 00:23:59.112 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:59.112 [2024-06-10 19:08:13.648080] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # wait 1744960 00:23:59.112 [2024-06-10 19:08:13.671368] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:23:59.112 00:23:59.112 real 0m20.878s 00:23:59.112 user 0m27.829s 00:23:59.112 sys 0m4.761s 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:59.112 19:08:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.112 ************************************ 00:23:59.112 END TEST raid_rebuild_test 00:23:59.112 ************************************ 00:23:59.371 19:08:13 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:23:59.371 19:08:13 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:23:59.371 19:08:13 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:59.371 19:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:59.371 ************************************ 00:23:59.371 START TEST raid_rebuild_test_sb 00:23:59.371 ************************************ 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false true 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local 
superblock=true 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:59.371 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:59.372 19:08:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=1748774 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 1748774 /var/tmp/spdk-raid.sock 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1748774 ']' 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:59.372 19:08:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.372 [2024-06-10 19:08:14.019518] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:23:59.372 [2024-06-10 19:08:14.019598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748774 ] 00:23:59.372 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:23:59.372 Zero copy mechanism will not be used. 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.0 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.1 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.2 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.3 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.4 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.5 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.6 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:01.7 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.0 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.1 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.2 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.3 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.4 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 
0000:b6:02.5 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.6 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b6:02.7 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.0 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.1 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.2 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.3 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.4 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.5 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.6 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:01.7 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.0 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.1 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.2 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.3 cannot be 
used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.4 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.5 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.6 cannot be used 00:23:59.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.372 EAL: Requested device 0000:b8:02.7 cannot be used 00:23:59.631 [2024-06-10 19:08:14.157265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.631 [2024-06-10 19:08:14.240910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.631 [2024-06-10 19:08:14.299067] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:59.631 [2024-06-10 19:08:14.299111] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:00.199 19:08:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:00.199 19:08:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@863 -- # return 0 00:24:00.199 19:08:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:00.199 19:08:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:00.457 BaseBdev1_malloc 00:24:00.457 19:08:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:00.716 [2024-06-10 19:08:15.348081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:00.716 [2024-06-10 19:08:15.348126] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.716 [2024-06-10 19:08:15.348146] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2318200 00:24:00.716 [2024-06-10 19:08:15.348157] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.716 [2024-06-10 19:08:15.349608] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.716 [2024-06-10 19:08:15.349634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:00.716 BaseBdev1 00:24:00.716 19:08:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:00.716 19:08:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:00.974 BaseBdev2_malloc 00:24:00.974 19:08:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:01.233 [2024-06-10 19:08:15.809654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:01.233 [2024-06-10 19:08:15.809693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.233 [2024-06-10 19:08:15.809712] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24afd90 00:24:01.233 [2024-06-10 19:08:15.809723] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.233 [2024-06-10 19:08:15.811072] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.233 [2024-06-10 19:08:15.811098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:01.233 BaseBdev2 00:24:01.233 19:08:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:01.493 spare_malloc 00:24:01.493 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:01.752 spare_delay 00:24:01.752 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:01.752 [2024-06-10 19:08:16.495584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:01.752 [2024-06-10 19:08:16.495622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.752 [2024-06-10 19:08:16.495639] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2310b80 00:24:01.752 [2024-06-10 19:08:16.495651] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.752 [2024-06-10 19:08:16.496926] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.752 [2024-06-10 19:08:16.496956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:01.752 spare 00:24:02.010 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:02.010 [2024-06-10 19:08:16.724198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:02.010 [2024-06-10 19:08:16.725331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:02.010 [2024-06-10 19:08:16.725481] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x2311d90 00:24:02.010 [2024-06-10 19:08:16.725494] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:02.010 [2024-06-10 19:08:16.725664] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b2ee0 00:24:02.010 [2024-06-10 19:08:16.725786] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2311d90 00:24:02.011 [2024-06-10 19:08:16.725796] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2311d90 00:24:02.011 [2024-06-10 19:08:16.725881] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.011 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.011 19:08:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.269 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.269 "name": "raid_bdev1", 00:24:02.269 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:02.269 "strip_size_kb": 0, 00:24:02.269 "state": "online", 00:24:02.269 "raid_level": "raid1", 00:24:02.269 "superblock": true, 00:24:02.269 "num_base_bdevs": 2, 00:24:02.269 "num_base_bdevs_discovered": 2, 00:24:02.269 "num_base_bdevs_operational": 2, 00:24:02.269 "base_bdevs_list": [ 00:24:02.269 { 00:24:02.269 "name": "BaseBdev1", 00:24:02.269 "uuid": "868b2ca5-5a24-523f-961d-7992e5285f4d", 00:24:02.269 "is_configured": true, 00:24:02.269 "data_offset": 2048, 00:24:02.270 "data_size": 63488 00:24:02.270 }, 00:24:02.270 { 00:24:02.270 "name": "BaseBdev2", 00:24:02.270 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:02.270 "is_configured": true, 00:24:02.270 "data_offset": 2048, 00:24:02.270 "data_size": 63488 00:24:02.270 } 00:24:02.270 ] 00:24:02.270 }' 00:24:02.270 19:08:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.270 19:08:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.838 19:08:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:02.838 19:08:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:03.096 [2024-06-10 19:08:17.751088] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.096 19:08:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:24:03.096 19:08:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:03.096 19:08:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:03.355 19:08:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.355 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:03.614 [2024-06-10 19:08:18.151993] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b2ee0 00:24:03.614 /dev/nbd0 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:03.614 19:08:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:03.614 1+0 records in 00:24:03.614 1+0 records out 00:24:03.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266131 s, 15.4 MB/s 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:24:03.614 19:08:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:08.884 63488+0 records in 00:24:08.884 63488+0 records out 00:24:08.884 32505856 bytes (33 MB, 31 MiB) copied, 5.14069 s, 6.3 MB/s 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:08.884 [2024-06-10 19:08:23.600533] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:08.884 19:08:23 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:08.884 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:09.143 [2024-06-10 19:08:23.821182] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:09.143 19:08:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.402 19:08:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.402 "name": "raid_bdev1", 00:24:09.402 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:09.402 "strip_size_kb": 0, 00:24:09.402 "state": "online", 00:24:09.402 "raid_level": "raid1", 00:24:09.402 "superblock": true, 00:24:09.402 "num_base_bdevs": 2, 00:24:09.402 "num_base_bdevs_discovered": 1, 00:24:09.402 "num_base_bdevs_operational": 1, 00:24:09.402 "base_bdevs_list": [ 00:24:09.402 { 00:24:09.402 "name": null, 00:24:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.402 "is_configured": false, 00:24:09.402 "data_offset": 2048, 00:24:09.402 "data_size": 63488 00:24:09.402 }, 00:24:09.402 { 00:24:09.402 "name": "BaseBdev2", 00:24:09.402 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:09.402 "is_configured": true, 00:24:09.402 "data_offset": 2048, 00:24:09.402 "data_size": 63488 00:24:09.402 } 00:24:09.402 ] 00:24:09.402 }' 00:24:09.402 19:08:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.402 19:08:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.969 19:08:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:10.228 [2024-06-10 19:08:24.863934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.228 [2024-06-10 19:08:24.868691] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x230fa10 00:24:10.228 [2024-06-10 19:08:24.870685] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:10.228 19:08:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:11.164 19:08:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.164 19:08:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:11.164 19:08:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:11.164 19:08:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:11.164 19:08:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:11.164 19:08:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.164 19:08:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.423 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:11.424 "name": "raid_bdev1", 00:24:11.424 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:11.424 "strip_size_kb": 0, 00:24:11.424 "state": "online", 00:24:11.424 "raid_level": "raid1", 00:24:11.424 "superblock": true, 00:24:11.424 "num_base_bdevs": 2, 00:24:11.424 "num_base_bdevs_discovered": 2, 00:24:11.424 "num_base_bdevs_operational": 2, 00:24:11.424 "process": { 00:24:11.424 "type": "rebuild", 00:24:11.424 "target": "spare", 00:24:11.424 "progress": { 00:24:11.424 "blocks": 24576, 00:24:11.424 "percent": 38 00:24:11.424 } 00:24:11.424 }, 00:24:11.424 "base_bdevs_list": [ 00:24:11.424 { 00:24:11.424 "name": "spare", 00:24:11.424 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:11.424 "is_configured": true, 00:24:11.424 "data_offset": 2048, 00:24:11.424 "data_size": 63488 00:24:11.424 }, 00:24:11.424 { 00:24:11.424 "name": "BaseBdev2", 00:24:11.424 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:11.424 "is_configured": true, 00:24:11.424 "data_offset": 2048, 00:24:11.424 "data_size": 63488 
00:24:11.424 } 00:24:11.424 ] 00:24:11.424 }' 00:24:11.424 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:11.424 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.424 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:11.683 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.683 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:11.684 [2024-06-10 19:08:26.417475] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:11.942 [2024-06-10 19:08:26.482503] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:11.942 [2024-06-10 19:08:26.482550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.942 [2024-06-10 19:08:26.482564] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:11.942 [2024-06-10 19:08:26.482571] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=1 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.942 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.943 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.943 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.943 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.201 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.201 "name": "raid_bdev1", 00:24:12.201 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:12.201 "strip_size_kb": 0, 00:24:12.201 "state": "online", 00:24:12.201 "raid_level": "raid1", 00:24:12.201 "superblock": true, 00:24:12.201 "num_base_bdevs": 2, 00:24:12.201 "num_base_bdevs_discovered": 1, 00:24:12.201 "num_base_bdevs_operational": 1, 00:24:12.201 "base_bdevs_list": [ 00:24:12.201 { 00:24:12.201 "name": null, 00:24:12.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.201 "is_configured": false, 00:24:12.201 "data_offset": 2048, 00:24:12.201 "data_size": 63488 00:24:12.201 }, 00:24:12.201 { 00:24:12.201 "name": "BaseBdev2", 00:24:12.201 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:12.201 "is_configured": true, 00:24:12.201 "data_offset": 2048, 00:24:12.201 "data_size": 63488 00:24:12.201 } 00:24:12.201 ] 00:24:12.201 }' 00:24:12.201 19:08:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.201 19:08:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.768 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.027 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:13.027 "name": "raid_bdev1", 00:24:13.027 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:13.027 "strip_size_kb": 0, 00:24:13.027 "state": "online", 00:24:13.027 "raid_level": "raid1", 00:24:13.027 "superblock": true, 00:24:13.027 "num_base_bdevs": 2, 00:24:13.027 "num_base_bdevs_discovered": 1, 00:24:13.027 "num_base_bdevs_operational": 1, 00:24:13.027 "base_bdevs_list": [ 00:24:13.027 { 00:24:13.027 "name": null, 00:24:13.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.027 "is_configured": false, 00:24:13.027 "data_offset": 2048, 00:24:13.027 "data_size": 63488 00:24:13.027 }, 00:24:13.027 { 00:24:13.027 "name": "BaseBdev2", 00:24:13.027 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:13.027 "is_configured": true, 00:24:13.027 "data_offset": 2048, 00:24:13.027 "data_size": 63488 00:24:13.027 } 00:24:13.027 ] 00:24:13.027 }' 00:24:13.027 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:13.027 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none 
== \n\o\n\e ]] 00:24:13.027 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:13.027 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:13.027 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:13.286 [2024-06-10 19:08:27.822308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.286 [2024-06-10 19:08:27.827116] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b1ac0 00:24:13.286 [2024-06-10 19:08:27.828482] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:13.286 19:08:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:14.221 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.221 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:14.222 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:14.222 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:14.222 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:14.222 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.222 19:08:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.480 "name": "raid_bdev1", 00:24:14.480 "uuid": 
"25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:14.480 "strip_size_kb": 0, 00:24:14.480 "state": "online", 00:24:14.480 "raid_level": "raid1", 00:24:14.480 "superblock": true, 00:24:14.480 "num_base_bdevs": 2, 00:24:14.480 "num_base_bdevs_discovered": 2, 00:24:14.480 "num_base_bdevs_operational": 2, 00:24:14.480 "process": { 00:24:14.480 "type": "rebuild", 00:24:14.480 "target": "spare", 00:24:14.480 "progress": { 00:24:14.480 "blocks": 24576, 00:24:14.480 "percent": 38 00:24:14.480 } 00:24:14.480 }, 00:24:14.480 "base_bdevs_list": [ 00:24:14.480 { 00:24:14.480 "name": "spare", 00:24:14.480 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:14.480 "is_configured": true, 00:24:14.480 "data_offset": 2048, 00:24:14.480 "data_size": 63488 00:24:14.480 }, 00:24:14.480 { 00:24:14.480 "name": "BaseBdev2", 00:24:14.480 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:14.480 "is_configured": true, 00:24:14.480 "data_offset": 2048, 00:24:14.480 "data_size": 63488 00:24:14.480 } 00:24:14.480 ] 00:24:14.480 }' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:14.480 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 
-- # '[' raid1 = raid1 ']' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=734 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.480 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.739 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.739 "name": "raid_bdev1", 00:24:14.739 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:14.739 "strip_size_kb": 0, 00:24:14.739 "state": "online", 00:24:14.739 "raid_level": "raid1", 00:24:14.739 "superblock": true, 00:24:14.739 "num_base_bdevs": 2, 00:24:14.739 "num_base_bdevs_discovered": 2, 00:24:14.739 "num_base_bdevs_operational": 2, 00:24:14.739 "process": { 00:24:14.739 "type": "rebuild", 00:24:14.739 "target": "spare", 00:24:14.739 "progress": { 00:24:14.739 "blocks": 30720, 00:24:14.739 "percent": 48 00:24:14.739 } 00:24:14.739 }, 00:24:14.739 "base_bdevs_list": [ 00:24:14.739 { 00:24:14.739 "name": "spare", 00:24:14.739 "uuid": 
"db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:14.739 "is_configured": true, 00:24:14.739 "data_offset": 2048, 00:24:14.739 "data_size": 63488 00:24:14.739 }, 00:24:14.739 { 00:24:14.739 "name": "BaseBdev2", 00:24:14.739 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:14.739 "is_configured": true, 00:24:14.739 "data_offset": 2048, 00:24:14.739 "data_size": 63488 00:24:14.739 } 00:24:14.739 ] 00:24:14.739 }' 00:24:14.739 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.739 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.739 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.739 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.739 19:08:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.186 19:08:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:16.186 "name": "raid_bdev1", 00:24:16.186 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:16.186 "strip_size_kb": 0, 00:24:16.186 "state": "online", 00:24:16.186 "raid_level": "raid1", 00:24:16.186 "superblock": true, 00:24:16.186 "num_base_bdevs": 2, 00:24:16.186 "num_base_bdevs_discovered": 2, 00:24:16.186 "num_base_bdevs_operational": 2, 00:24:16.186 "process": { 00:24:16.186 "type": "rebuild", 00:24:16.186 "target": "spare", 00:24:16.186 "progress": { 00:24:16.186 "blocks": 57344, 00:24:16.186 "percent": 90 00:24:16.186 } 00:24:16.186 }, 00:24:16.186 "base_bdevs_list": [ 00:24:16.186 { 00:24:16.186 "name": "spare", 00:24:16.186 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:16.186 "is_configured": true, 00:24:16.186 "data_offset": 2048, 00:24:16.186 "data_size": 63488 00:24:16.186 }, 00:24:16.186 { 00:24:16.186 "name": "BaseBdev2", 00:24:16.186 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:16.186 "is_configured": true, 00:24:16.186 "data_offset": 2048, 00:24:16.186 "data_size": 63488 00:24:16.186 } 00:24:16.186 ] 00:24:16.186 }' 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.186 19:08:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:16.444 [2024-06-10 19:08:30.951044] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:16.444 [2024-06-10 19:08:30.951097] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:16.444 [2024-06-10 
19:08:30.951175] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.381 19:08:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.381 "name": "raid_bdev1", 00:24:17.381 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:17.381 "strip_size_kb": 0, 00:24:17.381 "state": "online", 00:24:17.381 "raid_level": "raid1", 00:24:17.381 "superblock": true, 00:24:17.381 "num_base_bdevs": 2, 00:24:17.381 "num_base_bdevs_discovered": 2, 00:24:17.381 "num_base_bdevs_operational": 2, 00:24:17.381 "base_bdevs_list": [ 00:24:17.381 { 00:24:17.381 "name": "spare", 00:24:17.381 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:17.381 "is_configured": true, 00:24:17.381 "data_offset": 2048, 00:24:17.381 "data_size": 63488 00:24:17.381 }, 00:24:17.381 { 00:24:17.381 "name": "BaseBdev2", 00:24:17.381 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:17.381 "is_configured": true, 00:24:17.381 "data_offset": 2048, 00:24:17.381 
"data_size": 63488 00:24:17.381 } 00:24:17.381 ] 00:24:17.381 }' 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.381 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.640 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.640 "name": "raid_bdev1", 00:24:17.640 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:17.640 "strip_size_kb": 0, 00:24:17.640 "state": "online", 00:24:17.640 "raid_level": "raid1", 00:24:17.640 "superblock": true, 00:24:17.640 "num_base_bdevs": 2, 00:24:17.640 "num_base_bdevs_discovered": 2, 00:24:17.640 "num_base_bdevs_operational": 2, 00:24:17.640 "base_bdevs_list": [ 00:24:17.640 { 
00:24:17.640 "name": "spare", 00:24:17.640 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:17.640 "is_configured": true, 00:24:17.640 "data_offset": 2048, 00:24:17.640 "data_size": 63488 00:24:17.640 }, 00:24:17.640 { 00:24:17.640 "name": "BaseBdev2", 00:24:17.640 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:17.641 "is_configured": true, 00:24:17.641 "data_offset": 2048, 00:24:17.641 "data_size": 63488 00:24:17.641 } 00:24:17.641 ] 00:24:17.641 }' 00:24:17.641 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.641 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:17.641 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.900 "name": "raid_bdev1", 00:24:17.900 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:17.900 "strip_size_kb": 0, 00:24:17.900 "state": "online", 00:24:17.900 "raid_level": "raid1", 00:24:17.900 "superblock": true, 00:24:17.900 "num_base_bdevs": 2, 00:24:17.900 "num_base_bdevs_discovered": 2, 00:24:17.900 "num_base_bdevs_operational": 2, 00:24:17.900 "base_bdevs_list": [ 00:24:17.900 { 00:24:17.900 "name": "spare", 00:24:17.900 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:17.900 "is_configured": true, 00:24:17.900 "data_offset": 2048, 00:24:17.900 "data_size": 63488 00:24:17.900 }, 00:24:17.900 { 00:24:17.900 "name": "BaseBdev2", 00:24:17.900 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:17.900 "is_configured": true, 00:24:17.900 "data_offset": 2048, 00:24:17.900 "data_size": 63488 00:24:17.900 } 00:24:17.900 ] 00:24:17.900 }' 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.900 19:08:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.468 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:18.727 [2024-06-10 19:08:33.409798] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:18.727 [2024-06-10 19:08:33.409824] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:18.727 [2024-06-10 
19:08:33.409881] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.727 [2024-06-10 19:08:33.409930] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.727 [2024-06-10 19:08:33.409941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2311d90 name raid_bdev1, state offline 00:24:18.727 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.727 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:18.985 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:18.985 
19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:19.244 /dev/nbd0 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.245 1+0 records in 00:24:19.245 1+0 records out 00:24:19.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249251 s, 16.4 MB/s 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.245 19:08:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:19.504 /dev/nbd1 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.504 1+0 records in 00:24:19.504 1+0 records 
out 00:24:19.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221331 s, 18.5 MB/s 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.504 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.763 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:20.022 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:20.280 19:08:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:20.539 [2024-06-10 19:08:35.087158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:20.539 [2024-06-10 19:08:35.087201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.539 [2024-06-10 19:08:35.087221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2311830 00:24:20.539 [2024-06-10 19:08:35.087233] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.539 [2024-06-10 19:08:35.088753] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.539 [2024-06-10 19:08:35.088782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:20.539 [2024-06-10 19:08:35.088853] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:20.539 [2024-06-10 19:08:35.088878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.539 [2024-06-10 19:08:35.088974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.539 spare 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.539 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.539 [2024-06-10 19:08:35.189283] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2317660 00:24:20.539 [2024-06-10 19:08:35.189300] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:20.539 [2024-06-10 19:08:35.189483] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2312070 00:24:20.539 [2024-06-10 19:08:35.189637] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2317660 00:24:20.539 [2024-06-10 19:08:35.189648] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2317660 00:24:20.539 [2024-06-10 19:08:35.189748] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.798 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.798 "name": "raid_bdev1", 00:24:20.798 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:20.798 "strip_size_kb": 0, 00:24:20.798 "state": "online", 00:24:20.798 "raid_level": "raid1", 00:24:20.798 "superblock": true, 
00:24:20.798 "num_base_bdevs": 2, 00:24:20.798 "num_base_bdevs_discovered": 2, 00:24:20.798 "num_base_bdevs_operational": 2, 00:24:20.798 "base_bdevs_list": [ 00:24:20.798 { 00:24:20.798 "name": "spare", 00:24:20.798 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:20.798 "is_configured": true, 00:24:20.798 "data_offset": 2048, 00:24:20.798 "data_size": 63488 00:24:20.799 }, 00:24:20.799 { 00:24:20.799 "name": "BaseBdev2", 00:24:20.799 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:20.799 "is_configured": true, 00:24:20.799 "data_offset": 2048, 00:24:20.799 "data_size": 63488 00:24:20.799 } 00:24:20.799 ] 00:24:20.799 }' 00:24:20.799 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.799 19:08:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.367 19:08:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:21.626 "name": "raid_bdev1", 00:24:21.626 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:21.626 "strip_size_kb": 0, 00:24:21.626 "state": "online", 00:24:21.626 
"raid_level": "raid1", 00:24:21.626 "superblock": true, 00:24:21.626 "num_base_bdevs": 2, 00:24:21.626 "num_base_bdevs_discovered": 2, 00:24:21.626 "num_base_bdevs_operational": 2, 00:24:21.626 "base_bdevs_list": [ 00:24:21.626 { 00:24:21.626 "name": "spare", 00:24:21.626 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:21.626 "is_configured": true, 00:24:21.626 "data_offset": 2048, 00:24:21.626 "data_size": 63488 00:24:21.626 }, 00:24:21.626 { 00:24:21.626 "name": "BaseBdev2", 00:24:21.626 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:21.626 "is_configured": true, 00:24:21.626 "data_offset": 2048, 00:24:21.626 "data_size": 63488 00:24:21.626 } 00:24:21.626 ] 00:24:21.626 }' 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.626 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:21.885 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.885 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:22.144 [2024-06-10 19:08:36.675442] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.144 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.403 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.403 "name": "raid_bdev1", 00:24:22.403 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:22.403 "strip_size_kb": 0, 00:24:22.403 "state": "online", 00:24:22.403 "raid_level": "raid1", 00:24:22.403 "superblock": true, 00:24:22.403 "num_base_bdevs": 2, 00:24:22.403 "num_base_bdevs_discovered": 1, 00:24:22.403 "num_base_bdevs_operational": 1, 00:24:22.403 "base_bdevs_list": [ 00:24:22.403 { 00:24:22.403 "name": null, 00:24:22.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.403 "is_configured": false, 00:24:22.403 "data_offset": 2048, 00:24:22.403 "data_size": 63488 
00:24:22.403 }, 00:24:22.403 { 00:24:22.403 "name": "BaseBdev2", 00:24:22.404 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:22.404 "is_configured": true, 00:24:22.404 "data_offset": 2048, 00:24:22.404 "data_size": 63488 00:24:22.404 } 00:24:22.404 ] 00:24:22.404 }' 00:24:22.404 19:08:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.404 19:08:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.972 19:08:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:22.972 [2024-06-10 19:08:37.706171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:22.972 [2024-06-10 19:08:37.706316] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:22.972 [2024-06-10 19:08:37.706333] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:22.972 [2024-06-10 19:08:37.706360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:22.972 [2024-06-10 19:08:37.711011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20151c0 00:24:22.972 [2024-06-10 19:08:37.713153] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:22.972 19:08:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.349 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:24.349 "name": "raid_bdev1", 00:24:24.349 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:24.349 "strip_size_kb": 0, 00:24:24.349 "state": "online", 00:24:24.349 "raid_level": "raid1", 00:24:24.349 "superblock": true, 00:24:24.349 "num_base_bdevs": 2, 00:24:24.349 "num_base_bdevs_discovered": 2, 00:24:24.349 "num_base_bdevs_operational": 2, 00:24:24.349 "process": { 00:24:24.349 "type": "rebuild", 00:24:24.349 "target": "spare", 00:24:24.349 "progress": { 00:24:24.349 "blocks": 24576, 00:24:24.349 "percent": 38 
00:24:24.349 } 00:24:24.349 }, 00:24:24.349 "base_bdevs_list": [ 00:24:24.349 { 00:24:24.350 "name": "spare", 00:24:24.350 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:24.350 "is_configured": true, 00:24:24.350 "data_offset": 2048, 00:24:24.350 "data_size": 63488 00:24:24.350 }, 00:24:24.350 { 00:24:24.350 "name": "BaseBdev2", 00:24:24.350 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:24.350 "is_configured": true, 00:24:24.350 "data_offset": 2048, 00:24:24.350 "data_size": 63488 00:24:24.350 } 00:24:24.350 ] 00:24:24.350 }' 00:24:24.350 19:08:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:24.350 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:24.350 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:24.350 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.350 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:24.609 [2024-06-10 19:08:39.255855] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.609 [2024-06-10 19:08:39.324847] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:24.609 [2024-06-10 19:08:39.324890] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.609 [2024-06-10 19:08:39.324904] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:24.609 [2024-06-10 19:08:39.324912] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:24.609 19:08:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.609 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.868 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.868 "name": "raid_bdev1", 00:24:24.868 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:24.868 "strip_size_kb": 0, 00:24:24.868 "state": "online", 00:24:24.868 "raid_level": "raid1", 00:24:24.868 "superblock": true, 00:24:24.868 "num_base_bdevs": 2, 00:24:24.868 "num_base_bdevs_discovered": 1, 00:24:24.868 "num_base_bdevs_operational": 1, 00:24:24.868 "base_bdevs_list": [ 00:24:24.868 { 00:24:24.868 "name": null, 00:24:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.868 "is_configured": false, 00:24:24.868 "data_offset": 2048, 00:24:24.868 "data_size": 63488 00:24:24.868 }, 00:24:24.868 { 
00:24:24.868 "name": "BaseBdev2", 00:24:24.868 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:24.868 "is_configured": true, 00:24:24.868 "data_offset": 2048, 00:24:24.868 "data_size": 63488 00:24:24.868 } 00:24:24.868 ] 00:24:24.868 }' 00:24:24.868 19:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.868 19:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.436 19:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:25.695 [2024-06-10 19:08:40.295567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:25.695 [2024-06-10 19:08:40.295625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.695 [2024-06-10 19:08:40.295646] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2310210 00:24:25.695 [2024-06-10 19:08:40.295657] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.695 [2024-06-10 19:08:40.296014] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.695 [2024-06-10 19:08:40.296031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:25.695 [2024-06-10 19:08:40.296109] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:25.695 [2024-06-10 19:08:40.296120] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:25.695 [2024-06-10 19:08:40.296130] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:25.695 [2024-06-10 19:08:40.296148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:25.695 [2024-06-10 19:08:40.300794] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2312400 00:24:25.695 spare 00:24:25.695 [2024-06-10 19:08:40.302155] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:25.695 19:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.632 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.891 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:26.891 "name": "raid_bdev1", 00:24:26.891 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:26.891 "strip_size_kb": 0, 00:24:26.891 "state": "online", 00:24:26.891 "raid_level": "raid1", 00:24:26.891 "superblock": true, 00:24:26.891 "num_base_bdevs": 2, 00:24:26.891 "num_base_bdevs_discovered": 2, 00:24:26.891 "num_base_bdevs_operational": 2, 00:24:26.891 "process": { 00:24:26.891 "type": "rebuild", 00:24:26.891 "target": "spare", 00:24:26.891 "progress": { 00:24:26.891 "blocks": 24576, 00:24:26.891 
"percent": 38 00:24:26.891 } 00:24:26.891 }, 00:24:26.891 "base_bdevs_list": [ 00:24:26.891 { 00:24:26.891 "name": "spare", 00:24:26.891 "uuid": "db4f0f53-c369-5dab-ac6e-32cf2633ea8b", 00:24:26.891 "is_configured": true, 00:24:26.891 "data_offset": 2048, 00:24:26.891 "data_size": 63488 00:24:26.891 }, 00:24:26.891 { 00:24:26.891 "name": "BaseBdev2", 00:24:26.891 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:26.891 "is_configured": true, 00:24:26.891 "data_offset": 2048, 00:24:26.891 "data_size": 63488 00:24:26.891 } 00:24:26.891 ] 00:24:26.891 }' 00:24:26.891 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:26.891 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:26.891 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:26.891 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:26.891 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:27.150 [2024-06-10 19:08:41.853237] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:27.410 [2024-06-10 19:08:41.913775] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:27.410 [2024-06-10 19:08:41.913822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.410 [2024-06-10 19:08:41.913836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:27.410 [2024-06-10 19:08:41.913844] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:27.410 
19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.410 19:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.669 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.669 "name": "raid_bdev1", 00:24:27.669 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:27.669 "strip_size_kb": 0, 00:24:27.669 "state": "online", 00:24:27.669 "raid_level": "raid1", 00:24:27.669 "superblock": true, 00:24:27.669 "num_base_bdevs": 2, 00:24:27.669 "num_base_bdevs_discovered": 1, 00:24:27.669 "num_base_bdevs_operational": 1, 00:24:27.669 "base_bdevs_list": [ 00:24:27.669 { 00:24:27.669 "name": null, 00:24:27.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.669 "is_configured": false, 00:24:27.669 "data_offset": 2048, 00:24:27.669 "data_size": 63488 00:24:27.669 }, 
00:24:27.669 { 00:24:27.669 "name": "BaseBdev2", 00:24:27.669 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:27.669 "is_configured": true, 00:24:27.669 "data_offset": 2048, 00:24:27.669 "data_size": 63488 00:24:27.669 } 00:24:27.669 ] 00:24:27.669 }' 00:24:27.669 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.669 19:08:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.234 "name": "raid_bdev1", 00:24:28.234 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:28.234 "strip_size_kb": 0, 00:24:28.234 "state": "online", 00:24:28.234 "raid_level": "raid1", 00:24:28.234 "superblock": true, 00:24:28.234 "num_base_bdevs": 2, 00:24:28.234 "num_base_bdevs_discovered": 1, 00:24:28.234 "num_base_bdevs_operational": 1, 00:24:28.234 "base_bdevs_list": [ 00:24:28.234 { 00:24:28.234 "name": null, 00:24:28.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.234 "is_configured": false, 00:24:28.234 "data_offset": 2048, 
00:24:28.234 "data_size": 63488 00:24:28.234 }, 00:24:28.234 { 00:24:28.234 "name": "BaseBdev2", 00:24:28.234 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:28.234 "is_configured": true, 00:24:28.234 "data_offset": 2048, 00:24:28.234 "data_size": 63488 00:24:28.234 } 00:24:28.234 ] 00:24:28.234 }' 00:24:28.234 19:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.492 19:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:28.492 19:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.492 19:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:28.492 19:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:28.774 19:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:28.774 [2024-06-10 19:08:43.466094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:28.774 [2024-06-10 19:08:43.466144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.774 [2024-06-10 19:08:43.466164] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23124d0 00:24:28.774 [2024-06-10 19:08:43.466176] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.774 [2024-06-10 19:08:43.466506] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.774 [2024-06-10 19:08:43.466522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:28.774 [2024-06-10 19:08:43.466597] bdev_raid.c:3752:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev BaseBdev1 00:24:28.774 [2024-06-10 19:08:43.466609] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:28.774 [2024-06-10 19:08:43.466619] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:28.774 BaseBdev1 00:24:28.774 19:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.150 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:24:30.150 "name": "raid_bdev1", 00:24:30.150 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:30.151 "strip_size_kb": 0, 00:24:30.151 "state": "online", 00:24:30.151 "raid_level": "raid1", 00:24:30.151 "superblock": true, 00:24:30.151 "num_base_bdevs": 2, 00:24:30.151 "num_base_bdevs_discovered": 1, 00:24:30.151 "num_base_bdevs_operational": 1, 00:24:30.151 "base_bdevs_list": [ 00:24:30.151 { 00:24:30.151 "name": null, 00:24:30.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.151 "is_configured": false, 00:24:30.151 "data_offset": 2048, 00:24:30.151 "data_size": 63488 00:24:30.151 }, 00:24:30.151 { 00:24:30.151 "name": "BaseBdev2", 00:24:30.151 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:30.151 "is_configured": true, 00:24:30.151 "data_offset": 2048, 00:24:30.151 "data_size": 63488 00:24:30.151 } 00:24:30.151 ] 00:24:30.151 }' 00:24:30.151 19:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.151 19:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.719 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.978 19:08:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:30.978 "name": "raid_bdev1", 00:24:30.978 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:30.978 "strip_size_kb": 0, 00:24:30.978 "state": "online", 00:24:30.978 "raid_level": "raid1", 00:24:30.978 "superblock": true, 00:24:30.978 "num_base_bdevs": 2, 00:24:30.978 "num_base_bdevs_discovered": 1, 00:24:30.978 "num_base_bdevs_operational": 1, 00:24:30.978 "base_bdevs_list": [ 00:24:30.978 { 00:24:30.978 "name": null, 00:24:30.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.978 "is_configured": false, 00:24:30.978 "data_offset": 2048, 00:24:30.978 "data_size": 63488 00:24:30.978 }, 00:24:30.978 { 00:24:30.978 "name": "BaseBdev2", 00:24:30.978 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:30.978 "is_configured": true, 00:24:30.978 "data_offset": 2048, 00:24:30.978 "data_size": 63488 00:24:30.978 } 00:24:30.978 ] 00:24:30.978 }' 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # local es=0 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:30.978 19:08:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:30.978 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:31.237 [2024-06-10 19:08:45.832339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:31.237 [2024-06-10 19:08:45.832459] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:31.237 [2024-06-10 19:08:45.832475] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:31.237 request: 00:24:31.237 { 00:24:31.237 "raid_bdev": "raid_bdev1", 00:24:31.237 "base_bdev": "BaseBdev1", 00:24:31.237 "method": "bdev_raid_add_base_bdev", 00:24:31.237 
"req_id": 1 00:24:31.237 } 00:24:31.237 Got JSON-RPC error response 00:24:31.237 response: 00:24:31.237 { 00:24:31.237 "code": -22, 00:24:31.237 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:31.237 } 00:24:31.237 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # es=1 00:24:31.237 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:31.237 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:31.237 19:08:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:31.237 19:08:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.174 19:08:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.433 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.433 "name": "raid_bdev1", 00:24:32.433 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:32.433 "strip_size_kb": 0, 00:24:32.433 "state": "online", 00:24:32.433 "raid_level": "raid1", 00:24:32.433 "superblock": true, 00:24:32.433 "num_base_bdevs": 2, 00:24:32.433 "num_base_bdevs_discovered": 1, 00:24:32.433 "num_base_bdevs_operational": 1, 00:24:32.433 "base_bdevs_list": [ 00:24:32.433 { 00:24:32.433 "name": null, 00:24:32.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.433 "is_configured": false, 00:24:32.433 "data_offset": 2048, 00:24:32.433 "data_size": 63488 00:24:32.433 }, 00:24:32.433 { 00:24:32.433 "name": "BaseBdev2", 00:24:32.433 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:32.433 "is_configured": true, 00:24:32.433 "data_offset": 2048, 00:24:32.433 "data_size": 63488 00:24:32.433 } 00:24:32.433 ] 00:24:32.433 }' 00:24:32.433 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.433 19:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.001 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.260 "name": "raid_bdev1", 00:24:33.260 "uuid": "25c30b98-28f2-4002-9353-76e2d72e9ec0", 00:24:33.260 "strip_size_kb": 0, 00:24:33.260 "state": "online", 00:24:33.260 "raid_level": "raid1", 00:24:33.260 "superblock": true, 00:24:33.260 "num_base_bdevs": 2, 00:24:33.260 "num_base_bdevs_discovered": 1, 00:24:33.260 "num_base_bdevs_operational": 1, 00:24:33.260 "base_bdevs_list": [ 00:24:33.260 { 00:24:33.260 "name": null, 00:24:33.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.260 "is_configured": false, 00:24:33.260 "data_offset": 2048, 00:24:33.260 "data_size": 63488 00:24:33.260 }, 00:24:33.260 { 00:24:33.260 "name": "BaseBdev2", 00:24:33.260 "uuid": "d17e6e2e-d6ad-59bb-b3c7-34581734c798", 00:24:33.260 "is_configured": true, 00:24:33.260 "data_offset": 2048, 00:24:33.260 "data_size": 63488 00:24:33.260 } 00:24:33.260 ] 00:24:33.260 }' 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 1748774 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1748774 ']' 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # kill -0 1748774 00:24:33.260 19:08:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # uname 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:33.260 19:08:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1748774 00:24:33.519 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:33.519 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:33.519 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1748774' 00:24:33.519 killing process with pid 1748774 00:24:33.519 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # kill 1748774 00:24:33.519 Received shutdown signal, test time was about 60.000000 seconds 00:24:33.519 00:24:33.519 Latency(us) 00:24:33.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.519 =================================================================================================================== 00:24:33.519 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.519 [2024-06-10 19:08:48.042196] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.519 [2024-06-10 19:08:48.042281] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.519 [2024-06-10 19:08:48.042319] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.519 [2024-06-10 19:08:48.042331] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2317660 name raid_bdev1, state offline 00:24:33.519 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # wait 1748774 00:24:33.519 [2024-06-10 19:08:48.066370] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:33.519 19:08:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:24:33.519 00:24:33.519 real 0m34.305s 00:24:33.519 user 0m49.096s 00:24:33.520 sys 0m6.639s 00:24:33.520 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:33.520 19:08:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.520 ************************************ 00:24:33.520 END TEST raid_rebuild_test_sb 00:24:33.520 ************************************ 00:24:33.778 19:08:48 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:24:33.778 19:08:48 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:33.778 19:08:48 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:33.778 19:08:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:33.778 ************************************ 00:24:33.778 START TEST raid_rebuild_test_io 00:24:33.778 ************************************ 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 false true true 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo 
BaseBdev1 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:33.778 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1754989 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1754989 /var/tmp/spdk-raid.sock 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@830 -- # '[' -z 1754989 ']' 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:33.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:33.779 19:08:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.779 [2024-06-10 19:08:48.413313] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:24:33.779 [2024-06-10 19:08:48.413370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754989 ] 00:24:33.779 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:33.779 Zero copy mechanism will not be used. 
00:24:33.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:33.779 EAL: Requested device 0000:b6:01.0 cannot be used 00:24:33.779
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:33.779 EAL: Requested device 0000:b8:02.4 cannot be used 00:24:33.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:33.779 EAL: Requested device 0000:b8:02.5 cannot be used 00:24:33.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:33.779 EAL: Requested device 0000:b8:02.6 cannot be used 00:24:33.779 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:33.779 EAL: Requested device 0000:b8:02.7 cannot be used 00:24:34.038 [2024-06-10 19:08:48.546141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.038 [2024-06-10 19:08:48.633247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.038 [2024-06-10 19:08:48.699853] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.038 [2024-06-10 19:08:48.699889] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.605 19:08:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:34.605 19:08:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@863 -- # return 0 00:24:34.605 19:08:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:34.605 19:08:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:34.863 BaseBdev1_malloc 00:24:34.863 19:08:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:35.122 [2024-06-10 19:08:49.745583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:35.122 [2024-06-10 19:08:49.745626] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:35.122 [2024-06-10 19:08:49.745645] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c32200 00:24:35.122 [2024-06-10 19:08:49.745656] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.122 [2024-06-10 19:08:49.747160] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.122 [2024-06-10 19:08:49.747186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:35.122 BaseBdev1 00:24:35.122 19:08:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:35.122 19:08:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:35.381 BaseBdev2_malloc 00:24:35.381 19:08:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:35.639 [2024-06-10 19:08:50.199175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:35.639 [2024-06-10 19:08:50.199219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.639 [2024-06-10 19:08:50.199237] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc9d90 00:24:35.639 [2024-06-10 19:08:50.199248] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.639 [2024-06-10 19:08:50.200747] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.639 [2024-06-10 19:08:50.200773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:35.639 BaseBdev2 00:24:35.639 19:08:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:35.898 spare_malloc 00:24:35.898 19:08:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:36.157 spare_delay 00:24:36.157 19:08:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:36.157 [2024-06-10 19:08:50.865284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:36.157 [2024-06-10 19:08:50.865326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.157 [2024-06-10 19:08:50.865345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c2ab80 00:24:36.157 [2024-06-10 19:08:50.865357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.157 [2024-06-10 19:08:50.866701] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.157 [2024-06-10 19:08:50.866727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:36.157 spare 00:24:36.157 19:08:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:36.416 [2024-06-10 19:08:51.077861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:36.416 [2024-06-10 19:08:51.078982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:36.416 [2024-06-10 19:08:51.079051] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1c2bd90 00:24:36.416 [2024-06-10 19:08:51.079061] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:36.416 [2024-06-10 19:08:51.079236] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dccee0 00:24:36.416 [2024-06-10 19:08:51.079363] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c2bd90 00:24:36.416 [2024-06-10 19:08:51.079372] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c2bd90 00:24:36.416 [2024-06-10 19:08:51.079472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.416 19:08:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.675 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.675 "name": "raid_bdev1", 00:24:36.675 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:36.675 "strip_size_kb": 0, 00:24:36.675 "state": "online", 00:24:36.675 "raid_level": "raid1", 00:24:36.675 "superblock": false, 00:24:36.675 "num_base_bdevs": 2, 00:24:36.675 "num_base_bdevs_discovered": 2, 00:24:36.675 "num_base_bdevs_operational": 2, 00:24:36.675 "base_bdevs_list": [ 00:24:36.675 { 00:24:36.675 "name": "BaseBdev1", 00:24:36.675 "uuid": "e0ad020f-838b-5e72-8282-246204d49fac", 00:24:36.675 "is_configured": true, 00:24:36.675 "data_offset": 0, 00:24:36.675 "data_size": 65536 00:24:36.675 }, 00:24:36.675 { 00:24:36.675 "name": "BaseBdev2", 00:24:36.675 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:36.675 "is_configured": true, 00:24:36.675 "data_offset": 0, 00:24:36.675 "data_size": 65536 00:24:36.675 } 00:24:36.675 ] 00:24:36.675 }' 00:24:36.675 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.675 19:08:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.241 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:37.241 19:08:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:37.500 [2024-06-10 19:08:52.068671] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.500 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:24:37.500 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.500 19:08:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:37.759 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:24:37.759 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:37.759 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:37.759 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:37.759 [2024-06-10 19:08:52.423405] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2b6c0 00:24:37.759 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:37.759 Zero copy mechanism will not be used. 00:24:37.759 Running I/O for 60 seconds... 
00:24:38.019 [2024-06-10 19:08:52.528538] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:38.019 [2024-06-10 19:08:52.536010] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c2b6c0 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.019 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.278 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.278 "name": "raid_bdev1", 00:24:38.278 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:38.278 "strip_size_kb": 0, 00:24:38.278 "state": "online", 00:24:38.278 "raid_level": "raid1", 00:24:38.278 "superblock": false, 
00:24:38.278 "num_base_bdevs": 2, 00:24:38.278 "num_base_bdevs_discovered": 1, 00:24:38.278 "num_base_bdevs_operational": 1, 00:24:38.278 "base_bdevs_list": [ 00:24:38.278 { 00:24:38.278 "name": null, 00:24:38.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.278 "is_configured": false, 00:24:38.278 "data_offset": 0, 00:24:38.278 "data_size": 65536 00:24:38.278 }, 00:24:38.278 { 00:24:38.278 "name": "BaseBdev2", 00:24:38.278 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:38.278 "is_configured": true, 00:24:38.278 "data_offset": 0, 00:24:38.278 "data_size": 65536 00:24:38.278 } 00:24:38.278 ] 00:24:38.278 }' 00:24:38.278 19:08:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.278 19:08:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.876 19:08:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:38.876 [2024-06-10 19:08:53.623678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.135 19:08:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:39.135 [2024-06-10 19:08:53.692920] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2ff30 00:24:39.135 [2024-06-10 19:08:53.695302] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.135 [2024-06-10 19:08:53.797432] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:39.135 [2024-06-10 19:08:53.797742] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:39.394 [2024-06-10 19:08:54.033017] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:24:39.394 [2024-06-10 19:08:54.033184] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:39.653 [2024-06-10 19:08:54.385301] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:39.912 [2024-06-10 19:08:54.619684] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.171 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.171 [2024-06-10 19:08:54.899280] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:40.430 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:40.430 "name": "raid_bdev1", 00:24:40.430 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:40.430 "strip_size_kb": 0, 00:24:40.430 "state": "online", 00:24:40.430 "raid_level": "raid1", 00:24:40.430 "superblock": false, 00:24:40.430 "num_base_bdevs": 2, 00:24:40.430 "num_base_bdevs_discovered": 2, 00:24:40.430 "num_base_bdevs_operational": 2, 
00:24:40.430 "process": { 00:24:40.430 "type": "rebuild", 00:24:40.430 "target": "spare", 00:24:40.430 "progress": { 00:24:40.430 "blocks": 12288, 00:24:40.430 "percent": 18 00:24:40.430 } 00:24:40.430 }, 00:24:40.430 "base_bdevs_list": [ 00:24:40.430 { 00:24:40.430 "name": "spare", 00:24:40.430 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:40.430 "is_configured": true, 00:24:40.430 "data_offset": 0, 00:24:40.430 "data_size": 65536 00:24:40.430 }, 00:24:40.430 { 00:24:40.430 "name": "BaseBdev2", 00:24:40.430 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:40.430 "is_configured": true, 00:24:40.430 "data_offset": 0, 00:24:40.430 "data_size": 65536 00:24:40.430 } 00:24:40.430 ] 00:24:40.430 }' 00:24:40.430 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:40.430 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.430 19:08:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:40.430 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.430 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:40.430 [2024-06-10 19:08:55.044270] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:40.689 [2024-06-10 19:08:55.221115] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.689 [2024-06-10 19:08:55.280698] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:40.689 [2024-06-10 19:08:55.390225] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:40.689 [2024-06-10 
19:08:55.392016] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.689 [2024-06-10 19:08:55.392045] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.689 [2024-06-10 19:08:55.392056] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:40.689 [2024-06-10 19:08:55.405388] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c2b6c0 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.689 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.690 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.690 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.949 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.949 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.949 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:24:40.949 "name": "raid_bdev1", 00:24:40.949 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:40.949 "strip_size_kb": 0, 00:24:40.949 "state": "online", 00:24:40.949 "raid_level": "raid1", 00:24:40.949 "superblock": false, 00:24:40.949 "num_base_bdevs": 2, 00:24:40.949 "num_base_bdevs_discovered": 1, 00:24:40.949 "num_base_bdevs_operational": 1, 00:24:40.949 "base_bdevs_list": [ 00:24:40.949 { 00:24:40.949 "name": null, 00:24:40.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.949 "is_configured": false, 00:24:40.949 "data_offset": 0, 00:24:40.949 "data_size": 65536 00:24:40.949 }, 00:24:40.949 { 00:24:40.949 "name": "BaseBdev2", 00:24:40.949 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:40.949 "is_configured": true, 00:24:40.949 "data_offset": 0, 00:24:40.949 "data_size": 65536 00:24:40.949 } 00:24:40.949 ] 00:24:40.949 }' 00:24:40.949 19:08:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.949 19:08:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.518 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.776 19:08:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.776 "name": "raid_bdev1", 00:24:41.776 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:41.776 "strip_size_kb": 0, 00:24:41.776 "state": "online", 00:24:41.776 "raid_level": "raid1", 00:24:41.776 "superblock": false, 00:24:41.776 "num_base_bdevs": 2, 00:24:41.776 "num_base_bdevs_discovered": 1, 00:24:41.776 "num_base_bdevs_operational": 1, 00:24:41.776 "base_bdevs_list": [ 00:24:41.776 { 00:24:41.776 "name": null, 00:24:41.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.776 "is_configured": false, 00:24:41.776 "data_offset": 0, 00:24:41.776 "data_size": 65536 00:24:41.776 }, 00:24:41.776 { 00:24:41.776 "name": "BaseBdev2", 00:24:41.776 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:41.776 "is_configured": true, 00:24:41.776 "data_offset": 0, 00:24:41.776 "data_size": 65536 00:24:41.776 } 00:24:41.776 ] 00:24:41.776 }' 00:24:41.776 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:42.035 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:42.035 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:42.035 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:42.035 19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.035 [2024-06-10 19:08:56.783133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.294 [2024-06-10 19:08:56.814970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2f920 00:24:42.294 [2024-06-10 19:08:56.816355] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:42.294 
19:08:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:42.294 [2024-06-10 19:08:56.917825] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:42.294 [2024-06-10 19:08:56.918197] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:42.552 [2024-06-10 19:08:57.144936] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:42.552 [2024-06-10 19:08:57.145183] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:42.811 [2024-06-10 19:08:57.390483] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:43.070 [2024-06-10 19:08:57.783686] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:43.070 [2024-06-10 19:08:57.783947] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:43.070 19:08:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.330 19:08:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:43.330 19:08:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:43.330 19:08:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:43.330 19:08:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:43.330 19:08:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.330 19:08:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.330 [2024-06-10 19:08:58.030778] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:43.330 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:43.330 "name": "raid_bdev1", 00:24:43.330 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:43.330 "strip_size_kb": 0, 00:24:43.330 "state": "online", 00:24:43.330 "raid_level": "raid1", 00:24:43.330 "superblock": false, 00:24:43.330 "num_base_bdevs": 2, 00:24:43.330 "num_base_bdevs_discovered": 2, 00:24:43.330 "num_base_bdevs_operational": 2, 00:24:43.330 "process": { 00:24:43.330 "type": "rebuild", 00:24:43.330 "target": "spare", 00:24:43.330 "progress": { 00:24:43.330 "blocks": 16384, 00:24:43.330 "percent": 25 00:24:43.330 } 00:24:43.330 }, 00:24:43.330 "base_bdevs_list": [ 00:24:43.330 { 00:24:43.330 "name": "spare", 00:24:43.330 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:43.330 "is_configured": true, 00:24:43.330 "data_offset": 0, 00:24:43.330 "data_size": 65536 00:24:43.330 }, 00:24:43.330 { 00:24:43.330 "name": "BaseBdev2", 00:24:43.330 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:43.330 "is_configured": true, 00:24:43.330 "data_offset": 0, 00:24:43.330 "data_size": 65536 00:24:43.330 } 00:24:43.330 ] 00:24:43.330 }' 00:24:43.330 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true 
']' 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=763 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:43.589 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.590 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:43.590 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:43.590 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:43.590 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:43.590 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.590 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.848 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:43.848 "name": "raid_bdev1", 00:24:43.848 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:43.848 "strip_size_kb": 0, 00:24:43.848 "state": "online", 00:24:43.848 "raid_level": "raid1", 00:24:43.848 "superblock": false, 00:24:43.848 "num_base_bdevs": 2, 00:24:43.848 "num_base_bdevs_discovered": 2, 00:24:43.848 "num_base_bdevs_operational": 2, 00:24:43.848 "process": { 00:24:43.848 "type": "rebuild", 00:24:43.848 "target": "spare", 00:24:43.848 "progress": { 
00:24:43.848 "blocks": 20480, 00:24:43.848 "percent": 31 00:24:43.848 } 00:24:43.848 }, 00:24:43.848 "base_bdevs_list": [ 00:24:43.848 { 00:24:43.848 "name": "spare", 00:24:43.848 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:43.848 "is_configured": true, 00:24:43.848 "data_offset": 0, 00:24:43.848 "data_size": 65536 00:24:43.848 }, 00:24:43.848 { 00:24:43.848 "name": "BaseBdev2", 00:24:43.848 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:43.848 "is_configured": true, 00:24:43.848 "data_offset": 0, 00:24:43.848 "data_size": 65536 00:24:43.848 } 00:24:43.848 ] 00:24:43.848 }' 00:24:43.848 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:43.848 [2024-06-10 19:08:58.424541] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:43.848 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.848 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.848 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.848 19:08:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:44.784 [2024-06-10 19:08:59.229494] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.784 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.043 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.043 "name": "raid_bdev1", 00:24:45.043 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:45.043 "strip_size_kb": 0, 00:24:45.043 "state": "online", 00:24:45.043 "raid_level": "raid1", 00:24:45.043 "superblock": false, 00:24:45.043 "num_base_bdevs": 2, 00:24:45.043 "num_base_bdevs_discovered": 2, 00:24:45.043 "num_base_bdevs_operational": 2, 00:24:45.043 "process": { 00:24:45.043 "type": "rebuild", 00:24:45.043 "target": "spare", 00:24:45.043 "progress": { 00:24:45.043 "blocks": 40960, 00:24:45.043 "percent": 62 00:24:45.043 } 00:24:45.043 }, 00:24:45.043 "base_bdevs_list": [ 00:24:45.043 { 00:24:45.043 "name": "spare", 00:24:45.043 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:45.043 "is_configured": true, 00:24:45.043 "data_offset": 0, 00:24:45.043 "data_size": 65536 00:24:45.043 }, 00:24:45.043 { 00:24:45.043 "name": "BaseBdev2", 00:24:45.043 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:45.043 "is_configured": true, 00:24:45.043 "data_offset": 0, 00:24:45.043 "data_size": 65536 00:24:45.043 } 00:24:45.043 ] 00:24:45.043 }' 00:24:45.043 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.043 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.043 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:24:45.043 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.043 19:08:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:45.301 [2024-06-10 19:08:59.916900] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:45.301 [2024-06-10 19:08:59.917277] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:46.236 [2024-06-10 19:09:00.706854] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:24:46.236 [2024-06-10 19:09:00.707134] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.236 19:09:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.236 [2024-06-10 19:09:00.918368] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 
offset_end: 61440 00:24:46.495 19:09:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:46.495 "name": "raid_bdev1", 00:24:46.495 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:46.495 "strip_size_kb": 0, 00:24:46.495 "state": "online", 00:24:46.495 "raid_level": "raid1", 00:24:46.495 "superblock": false, 00:24:46.495 "num_base_bdevs": 2, 00:24:46.495 "num_base_bdevs_discovered": 2, 00:24:46.495 "num_base_bdevs_operational": 2, 00:24:46.495 "process": { 00:24:46.495 "type": "rebuild", 00:24:46.495 "target": "spare", 00:24:46.495 "progress": { 00:24:46.495 "blocks": 59392, 00:24:46.495 "percent": 90 00:24:46.495 } 00:24:46.495 }, 00:24:46.495 "base_bdevs_list": [ 00:24:46.495 { 00:24:46.495 "name": "spare", 00:24:46.495 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:46.495 "is_configured": true, 00:24:46.495 "data_offset": 0, 00:24:46.495 "data_size": 65536 00:24:46.495 }, 00:24:46.495 { 00:24:46.495 "name": "BaseBdev2", 00:24:46.495 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:46.495 "is_configured": true, 00:24:46.495 "data_offset": 0, 00:24:46.495 "data_size": 65536 00:24:46.495 } 00:24:46.495 ] 00:24:46.495 }' 00:24:46.495 19:09:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:46.495 19:09:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:46.495 19:09:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:46.495 19:09:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.495 19:09:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:46.754 [2024-06-10 19:09:01.270915] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:46.754 [2024-06-10 19:09:01.371212] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:24:46.754 [2024-06-10 19:09:01.372249] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:47.690 "name": "raid_bdev1", 00:24:47.690 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:47.690 "strip_size_kb": 0, 00:24:47.690 "state": "online", 00:24:47.690 "raid_level": "raid1", 00:24:47.690 "superblock": false, 00:24:47.690 "num_base_bdevs": 2, 00:24:47.690 "num_base_bdevs_discovered": 2, 00:24:47.690 "num_base_bdevs_operational": 2, 00:24:47.690 "base_bdevs_list": [ 00:24:47.690 { 00:24:47.690 "name": "spare", 00:24:47.690 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:47.690 "is_configured": true, 00:24:47.690 "data_offset": 0, 00:24:47.690 "data_size": 65536 00:24:47.690 }, 00:24:47.690 { 00:24:47.690 "name": "BaseBdev2", 00:24:47.690 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:47.690 "is_configured": true, 00:24:47.690 
"data_offset": 0, 00:24:47.690 "data_size": 65536 00:24:47.690 } 00:24:47.690 ] 00:24:47.690 }' 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:47.690 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:47.948 "name": "raid_bdev1", 00:24:47.948 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:47.948 "strip_size_kb": 0, 00:24:47.948 "state": "online", 00:24:47.948 "raid_level": "raid1", 00:24:47.948 "superblock": false, 00:24:47.948 "num_base_bdevs": 2, 00:24:47.948 "num_base_bdevs_discovered": 2, 00:24:47.948 "num_base_bdevs_operational": 2, 00:24:47.948 
"base_bdevs_list": [ 00:24:47.948 { 00:24:47.948 "name": "spare", 00:24:47.948 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:47.948 "is_configured": true, 00:24:47.948 "data_offset": 0, 00:24:47.948 "data_size": 65536 00:24:47.948 }, 00:24:47.948 { 00:24:47.948 "name": "BaseBdev2", 00:24:47.948 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:47.948 "is_configured": true, 00:24:47.948 "data_offset": 0, 00:24:47.948 "data_size": 65536 00:24:47.948 } 00:24:47.948 ] 00:24:47.948 }' 00:24:47.948 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.207 19:09:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.207 19:09:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.466 19:09:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.466 "name": "raid_bdev1", 00:24:48.466 "uuid": "038bbb64-0794-486d-b800-2b89381116fa", 00:24:48.466 "strip_size_kb": 0, 00:24:48.466 "state": "online", 00:24:48.466 "raid_level": "raid1", 00:24:48.466 "superblock": false, 00:24:48.466 "num_base_bdevs": 2, 00:24:48.466 "num_base_bdevs_discovered": 2, 00:24:48.466 "num_base_bdevs_operational": 2, 00:24:48.466 "base_bdevs_list": [ 00:24:48.466 { 00:24:48.466 "name": "spare", 00:24:48.466 "uuid": "d928dc8f-454c-5240-bbf1-f5b8493e5d72", 00:24:48.466 "is_configured": true, 00:24:48.466 "data_offset": 0, 00:24:48.466 "data_size": 65536 00:24:48.466 }, 00:24:48.466 { 00:24:48.466 "name": "BaseBdev2", 00:24:48.466 "uuid": "8c4c774e-54e9-55e3-ae34-da32f30bbdec", 00:24:48.466 "is_configured": true, 00:24:48.466 "data_offset": 0, 00:24:48.466 "data_size": 65536 00:24:48.466 } 00:24:48.466 ] 00:24:48.466 }' 00:24:48.466 19:09:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.466 19:09:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.034 19:09:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:49.034 [2024-06-10 19:09:03.732177] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.034 [2024-06-10 19:09:03.732205] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:24:49.293 00:24:49.293 Latency(us) 00:24:49.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.293 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:49.293 raid_bdev1 : 11.38 98.64 295.91 0.00 0.00 13712.55 270.34 117440.51 00:24:49.293 =================================================================================================================== 00:24:49.293 Total : 98.64 295.91 0.00 0.00 13712.55 270.34 117440.51 00:24:49.293 [2024-06-10 19:09:03.832077] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.293 [2024-06-10 19:09:03.832105] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:49.293 [2024-06-10 19:09:03.832168] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:49.293 [2024-06-10 19:09:03.832179] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2bd90 name raid_bdev1, state offline 00:24:49.293 0 00:24:49.293 19:09:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.293 19:09:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 
-- # bdev_list=('spare') 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.293 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:49.551 /dev/nbd0 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.551 1+0 records in 00:24:49.551 1+0 records out 00:24:49.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262935 s, 15.6 MB/s 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:24:49.551 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.552 19:09:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.552 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:49.811 /dev/nbd1 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.811 1+0 records in 00:24:49.811 1+0 records out 00:24:49.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326497 s, 12.5 MB/s 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.811 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.071 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.331 19:09:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.591 19:09:05 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 1754989 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@949 -- # '[' -z 1754989 ']' 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # kill -0 1754989 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # uname 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1754989 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1754989' 00:24:50.591 killing process with pid 1754989 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # kill 1754989 00:24:50.591 Received shutdown signal, test time was about 12.727409 seconds 00:24:50.591 00:24:50.591 Latency(us) 00:24:50.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.591 =================================================================================================================== 00:24:50.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.591 [2024-06-10 19:09:05.183962] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.591 19:09:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@973 -- # wait 1754989 00:24:50.591 [2024-06-10 19:09:05.202694] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:50.851 00:24:50.851 real 0m17.053s 00:24:50.851 user 0m25.669s 00:24:50.851 sys 0m2.661s 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.851 ************************************ 00:24:50.851 END TEST raid_rebuild_test_io 00:24:50.851 ************************************ 00:24:50.851 19:09:05 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:24:50.851 19:09:05 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:24:50.851 19:09:05 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:50.851 19:09:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:50.851 ************************************ 00:24:50.851 START TEST raid_rebuild_test_sb_io 00:24:50.851 ************************************ 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true true true 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 
00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
raid_pid=1758548 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1758548 /var/tmp/spdk-raid.sock 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@830 -- # '[' -z 1758548 ']' 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:50.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:50.851 19:09:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.851 [2024-06-10 19:09:05.550392] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:24:50.851 [2024-06-10 19:09:05.550451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758548 ] 00:24:50.851 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:50.851 Zero copy mechanism will not be used. 
00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.0 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.1 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.2 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.3 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.4 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.5 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.6 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:01.7 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.0 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.1 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.2 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.3 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.4 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.5 cannot be used 00:24:51.111 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.6 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b6:02.7 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.0 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.1 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.2 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.3 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.4 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.5 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.6 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:01.7 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.0 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.1 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.2 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.3 cannot be used 00:24:51.111 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.4 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.5 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.6 cannot be used 00:24:51.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.111 EAL: Requested device 0000:b8:02.7 cannot be used 00:24:51.111 [2024-06-10 19:09:05.684129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.111 [2024-06-10 19:09:05.770688] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.111 [2024-06-10 19:09:05.833212] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:51.111 [2024-06-10 19:09:05.833250] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:52.048 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:52.048 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@863 -- # return 0 00:24:52.048 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:52.048 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:52.048 BaseBdev1_malloc 00:24:52.048 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:52.307 [2024-06-10 19:09:06.874039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:52.307 [2024-06-10 19:09:06.874085] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.307 [2024-06-10 19:09:06.874107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1928200 00:24:52.307 [2024-06-10 19:09:06.874119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.307 [2024-06-10 19:09:06.875677] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.307 [2024-06-10 19:09:06.875704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:52.307 BaseBdev1 00:24:52.307 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:52.307 19:09:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:52.566 BaseBdev2_malloc 00:24:52.566 19:09:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:52.566 [2024-06-10 19:09:07.271647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:52.566 [2024-06-10 19:09:07.271686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.566 [2024-06-10 19:09:07.271705] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1abfd90 00:24:52.566 [2024-06-10 19:09:07.271717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.566 [2024-06-10 19:09:07.273062] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.566 [2024-06-10 19:09:07.273090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:52.566 BaseBdev2 00:24:52.566 19:09:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:52.826 spare_malloc 00:24:52.826 19:09:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:53.085 spare_delay 00:24:53.085 19:09:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:53.344 [2024-06-10 19:09:07.942003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:53.344 [2024-06-10 19:09:07.942041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.344 [2024-06-10 19:09:07.942061] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1920b80 00:24:53.344 [2024-06-10 19:09:07.942072] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.344 [2024-06-10 19:09:07.943405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.344 [2024-06-10 19:09:07.943432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:53.344 spare 00:24:53.344 19:09:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:53.603 [2024-06-10 19:09:08.166627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:53.603 [2024-06-10 19:09:08.167737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:53.603 [2024-06-10 19:09:08.167886] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1921d90 00:24:53.603 [2024-06-10 19:09:08.167898] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:53.603 [2024-06-10 19:09:08.168061] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac2ee0 00:24:53.603 [2024-06-10 19:09:08.168182] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1921d90 00:24:53.603 [2024-06-10 19:09:08.168191] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1921d90 00:24:53.603 [2024-06-10 19:09:08.168275] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.603 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.862 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.862 "name": "raid_bdev1", 00:24:53.862 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:24:53.862 "strip_size_kb": 0, 00:24:53.862 "state": "online", 00:24:53.862 "raid_level": "raid1", 00:24:53.862 "superblock": true, 00:24:53.862 "num_base_bdevs": 2, 00:24:53.862 "num_base_bdevs_discovered": 2, 00:24:53.862 "num_base_bdevs_operational": 2, 00:24:53.862 "base_bdevs_list": [ 00:24:53.862 { 00:24:53.862 "name": "BaseBdev1", 00:24:53.862 "uuid": "03306313-08d7-5815-bd81-9466d6d26b65", 00:24:53.862 "is_configured": true, 00:24:53.862 "data_offset": 2048, 00:24:53.862 "data_size": 63488 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "name": "BaseBdev2", 00:24:53.862 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:24:53.862 "is_configured": true, 00:24:53.862 "data_offset": 2048, 00:24:53.862 "data_size": 63488 00:24:53.862 } 00:24:53.862 ] 00:24:53.862 }' 00:24:53.862 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.862 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.431 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:54.431 19:09:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:54.690 [2024-06-10 19:09:09.193502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:54.690 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:54.949 [2024-06-10 19:09:09.540195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac1710 00:24:54.949 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:54.949 Zero copy mechanism will not be used. 00:24:54.949 Running I/O for 60 seconds... 
00:24:54.949 [2024-06-10 19:09:09.655669] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:54.949 [2024-06-10 19:09:09.663221] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1ac1710 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.949 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.208 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.208 "name": "raid_bdev1", 00:24:55.208 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:24:55.208 "strip_size_kb": 0, 00:24:55.208 "state": "online", 00:24:55.208 "raid_level": 
"raid1", 00:24:55.208 "superblock": true, 00:24:55.208 "num_base_bdevs": 2, 00:24:55.208 "num_base_bdevs_discovered": 1, 00:24:55.208 "num_base_bdevs_operational": 1, 00:24:55.208 "base_bdevs_list": [ 00:24:55.208 { 00:24:55.208 "name": null, 00:24:55.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.208 "is_configured": false, 00:24:55.208 "data_offset": 2048, 00:24:55.208 "data_size": 63488 00:24:55.208 }, 00:24:55.208 { 00:24:55.208 "name": "BaseBdev2", 00:24:55.208 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:24:55.208 "is_configured": true, 00:24:55.208 "data_offset": 2048, 00:24:55.208 "data_size": 63488 00:24:55.208 } 00:24:55.208 ] 00:24:55.208 }' 00:24:55.209 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.209 19:09:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.775 19:09:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:56.034 [2024-06-10 19:09:10.684421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.034 [2024-06-10 19:09:10.723417] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19bcb70 00:24:56.034 19:09:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:56.034 [2024-06-10 19:09:10.725594] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:56.293 [2024-06-10 19:09:10.835120] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:56.293 [2024-06-10 19:09:10.835471] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:56.552 [2024-06-10 19:09:11.086984] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:56.811 [2024-06-10 19:09:11.422868] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:57.070 [2024-06-10 19:09:11.639873] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:57.070 [2024-06-10 19:09:11.640079] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.070 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.329 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.329 "name": "raid_bdev1", 00:24:57.329 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:24:57.329 "strip_size_kb": 0, 00:24:57.329 "state": "online", 00:24:57.329 "raid_level": "raid1", 00:24:57.329 "superblock": true, 00:24:57.329 "num_base_bdevs": 2, 00:24:57.329 "num_base_bdevs_discovered": 2, 00:24:57.329 "num_base_bdevs_operational": 2, 00:24:57.329 "process": { 00:24:57.329 "type": "rebuild", 00:24:57.329 "target": "spare", 
00:24:57.329 "progress": { 00:24:57.329 "blocks": 10240, 00:24:57.329 "percent": 16 00:24:57.329 } 00:24:57.329 }, 00:24:57.329 "base_bdevs_list": [ 00:24:57.329 { 00:24:57.329 "name": "spare", 00:24:57.329 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:24:57.329 "is_configured": true, 00:24:57.329 "data_offset": 2048, 00:24:57.329 "data_size": 63488 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "name": "BaseBdev2", 00:24:57.329 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:24:57.329 "is_configured": true, 00:24:57.329 "data_offset": 2048, 00:24:57.329 "data_size": 63488 00:24:57.329 } 00:24:57.329 ] 00:24:57.329 }' 00:24:57.329 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.329 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.329 19:09:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.329 [2024-06-10 19:09:12.000342] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:57.329 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.329 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:57.588 [2024-06-10 19:09:12.203280] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:57.589 [2024-06-10 19:09:12.212164] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.589 [2024-06-10 19:09:12.319768] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:57.589 [2024-06-10 19:09:12.328787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:57.589 [2024-06-10 19:09:12.328811] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.589 [2024-06-10 19:09:12.328819] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:57.589 [2024-06-10 19:09:12.341738] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1ac1710 00:24:57.848 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.849 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.108 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.108 "name": 
"raid_bdev1", 00:24:58.108 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:24:58.108 "strip_size_kb": 0, 00:24:58.108 "state": "online", 00:24:58.108 "raid_level": "raid1", 00:24:58.108 "superblock": true, 00:24:58.108 "num_base_bdevs": 2, 00:24:58.108 "num_base_bdevs_discovered": 1, 00:24:58.108 "num_base_bdevs_operational": 1, 00:24:58.108 "base_bdevs_list": [ 00:24:58.108 { 00:24:58.108 "name": null, 00:24:58.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.108 "is_configured": false, 00:24:58.108 "data_offset": 2048, 00:24:58.108 "data_size": 63488 00:24:58.108 }, 00:24:58.108 { 00:24:58.108 "name": "BaseBdev2", 00:24:58.108 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:24:58.108 "is_configured": true, 00:24:58.108 "data_offset": 2048, 00:24:58.108 "data_size": 63488 00:24:58.108 } 00:24:58.109 ] 00:24:58.109 }' 00:24:58.109 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.109 19:09:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.677 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.677 19:09:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:58.677 "name": "raid_bdev1", 00:24:58.677 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:24:58.677 "strip_size_kb": 0, 00:24:58.677 "state": "online", 00:24:58.677 "raid_level": "raid1", 00:24:58.677 "superblock": true, 00:24:58.677 "num_base_bdevs": 2, 00:24:58.677 "num_base_bdevs_discovered": 1, 00:24:58.677 "num_base_bdevs_operational": 1, 00:24:58.677 "base_bdevs_list": [ 00:24:58.677 { 00:24:58.677 "name": null, 00:24:58.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.677 "is_configured": false, 00:24:58.677 "data_offset": 2048, 00:24:58.677 "data_size": 63488 00:24:58.677 }, 00:24:58.677 { 00:24:58.677 "name": "BaseBdev2", 00:24:58.677 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:24:58.677 "is_configured": true, 00:24:58.677 "data_offset": 2048, 00:24:58.677 "data_size": 63488 00:24:58.677 } 00:24:58.677 ] 00:24:58.677 }' 00:24:58.937 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:58.937 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:58.937 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:58.937 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:58.937 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:59.196 [2024-06-10 19:09:13.717096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:59.196 [2024-06-10 19:09:13.770956] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x191ee70 00:24:59.196 [2024-06-10 19:09:13.772326] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:24:59.196 19:09:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:59.196 [2024-06-10 19:09:13.899692] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:59.196 [2024-06-10 19:09:13.899954] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:59.455 [2024-06-10 19:09:14.008564] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:59.455 [2024-06-10 19:09:14.008678] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:59.714 [2024-06-10 19:09:14.351402] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:59.714 [2024-06-10 19:09:14.351709] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:59.976 [2024-06-10 19:09:14.569128] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:59.976 [2024-06-10 19:09:14.569254] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:00.331 19:09:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.331 19:09:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:00.331 19:09:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:00.331 19:09:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:00.331 19:09:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:00.331 19:09:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.331 19:09:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.331 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.331 "name": "raid_bdev1", 00:25:00.331 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:00.331 "strip_size_kb": 0, 00:25:00.331 "state": "online", 00:25:00.331 "raid_level": "raid1", 00:25:00.331 "superblock": true, 00:25:00.331 "num_base_bdevs": 2, 00:25:00.331 "num_base_bdevs_discovered": 2, 00:25:00.331 "num_base_bdevs_operational": 2, 00:25:00.331 "process": { 00:25:00.331 "type": "rebuild", 00:25:00.331 "target": "spare", 00:25:00.331 "progress": { 00:25:00.331 "blocks": 14336, 00:25:00.331 "percent": 22 00:25:00.331 } 00:25:00.331 }, 00:25:00.331 "base_bdevs_list": [ 00:25:00.331 { 00:25:00.331 "name": "spare", 00:25:00.331 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:00.331 "is_configured": true, 00:25:00.331 "data_offset": 2048, 00:25:00.331 "data_size": 63488 00:25:00.331 }, 00:25:00.331 { 00:25:00.331 "name": "BaseBdev2", 00:25:00.331 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:00.331 "is_configured": true, 00:25:00.331 "data_offset": 2048, 00:25:00.331 "data_size": 63488 00:25:00.331 } 00:25:00.331 ] 00:25:00.331 }' 00:25:00.331 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:00.331 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.331 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.591 19:09:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:25:00.591 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=780 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.591 [2024-06-10 19:09:15.110276] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.591 "name": "raid_bdev1", 00:25:00.591 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:00.591 "strip_size_kb": 0, 00:25:00.591 "state": "online", 00:25:00.591 "raid_level": "raid1", 00:25:00.591 "superblock": true, 00:25:00.591 "num_base_bdevs": 2, 00:25:00.591 "num_base_bdevs_discovered": 2, 00:25:00.591 "num_base_bdevs_operational": 2, 00:25:00.591 "process": { 00:25:00.591 "type": "rebuild", 00:25:00.591 "target": "spare", 00:25:00.591 "progress": { 00:25:00.591 "blocks": 18432, 00:25:00.591 "percent": 29 00:25:00.591 } 00:25:00.591 }, 00:25:00.591 "base_bdevs_list": [ 00:25:00.591 { 00:25:00.591 "name": "spare", 00:25:00.591 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:00.591 "is_configured": true, 00:25:00.591 "data_offset": 2048, 00:25:00.591 "data_size": 63488 00:25:00.591 }, 00:25:00.591 { 00:25:00.591 "name": "BaseBdev2", 00:25:00.591 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:00.591 "is_configured": true, 00:25:00.591 "data_offset": 2048, 00:25:00.591 "data_size": 63488 00:25:00.591 } 00:25:00.591 ] 00:25:00.591 }' 00:25:00.591 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:00.852 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.852 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.852 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.852 19:09:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:00.852 [2024-06-10 19:09:15.470166] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:01.790 19:09:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.790 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.790 [2024-06-10 19:09:16.469799] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:02.049 [2024-06-10 19:09:16.587824] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:25:02.049 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:02.049 "name": "raid_bdev1", 00:25:02.049 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:02.049 "strip_size_kb": 0, 00:25:02.049 "state": "online", 00:25:02.049 "raid_level": "raid1", 00:25:02.049 "superblock": true, 00:25:02.049 "num_base_bdevs": 2, 00:25:02.049 "num_base_bdevs_discovered": 2, 00:25:02.049 "num_base_bdevs_operational": 2, 00:25:02.049 "process": { 00:25:02.049 "type": "rebuild", 00:25:02.049 "target": "spare", 00:25:02.049 "progress": { 00:25:02.049 "blocks": 40960, 00:25:02.049 "percent": 64 00:25:02.049 } 00:25:02.049 }, 00:25:02.049 "base_bdevs_list": [ 00:25:02.049 { 00:25:02.049 "name": "spare", 00:25:02.049 "uuid": 
"3afa9bae-ade3-5187-baee-a668655da164", 00:25:02.049 "is_configured": true, 00:25:02.049 "data_offset": 2048, 00:25:02.049 "data_size": 63488 00:25:02.049 }, 00:25:02.049 { 00:25:02.049 "name": "BaseBdev2", 00:25:02.049 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:02.049 "is_configured": true, 00:25:02.049 "data_offset": 2048, 00:25:02.049 "data_size": 63488 00:25:02.049 } 00:25:02.049 ] 00:25:02.049 }' 00:25:02.049 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:02.049 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:02.049 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:02.049 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:02.049 19:09:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:02.617 [2024-06-10 19:09:17.148423] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:25:02.617 [2024-06-10 19:09:17.266566] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:03.186 [2024-06-10 19:09:17.702593] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:03.186 19:09:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.186 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.186 [2024-06-10 19:09:17.928770] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:03.445 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.445 "name": "raid_bdev1", 00:25:03.445 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:03.445 "strip_size_kb": 0, 00:25:03.445 "state": "online", 00:25:03.445 "raid_level": "raid1", 00:25:03.445 "superblock": true, 00:25:03.445 "num_base_bdevs": 2, 00:25:03.445 "num_base_bdevs_discovered": 2, 00:25:03.445 "num_base_bdevs_operational": 2, 00:25:03.445 "process": { 00:25:03.445 "type": "rebuild", 00:25:03.445 "target": "spare", 00:25:03.445 "progress": { 00:25:03.445 "blocks": 63488, 00:25:03.445 "percent": 100 00:25:03.445 } 00:25:03.445 }, 00:25:03.445 "base_bdevs_list": [ 00:25:03.445 { 00:25:03.445 "name": "spare", 00:25:03.445 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:03.445 "is_configured": true, 00:25:03.445 "data_offset": 2048, 00:25:03.445 "data_size": 63488 00:25:03.445 }, 00:25:03.445 { 00:25:03.445 "name": "BaseBdev2", 00:25:03.445 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:03.445 "is_configured": true, 00:25:03.445 "data_offset": 2048, 00:25:03.445 "data_size": 63488 00:25:03.445 } 00:25:03.445 ] 00:25:03.445 }' 00:25:03.445 19:09:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.445 [2024-06-10 19:09:18.029023] 
bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:03.445 [2024-06-10 19:09:18.030538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.445 19:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.445 19:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.445 19:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.445 19:09:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.383 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.643 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.643 "name": "raid_bdev1", 00:25:04.643 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:04.643 "strip_size_kb": 0, 00:25:04.643 "state": "online", 00:25:04.643 "raid_level": "raid1", 00:25:04.643 
"superblock": true, 00:25:04.643 "num_base_bdevs": 2, 00:25:04.643 "num_base_bdevs_discovered": 2, 00:25:04.643 "num_base_bdevs_operational": 2, 00:25:04.643 "base_bdevs_list": [ 00:25:04.643 { 00:25:04.643 "name": "spare", 00:25:04.643 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:04.643 "is_configured": true, 00:25:04.643 "data_offset": 2048, 00:25:04.643 "data_size": 63488 00:25:04.643 }, 00:25:04.643 { 00:25:04.643 "name": "BaseBdev2", 00:25:04.643 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:04.643 "is_configured": true, 00:25:04.643 "data_offset": 2048, 00:25:04.643 "data_size": 63488 00:25:04.643 } 00:25:04.643 ] 00:25:04.643 }' 00:25:04.643 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:04.643 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:04.643 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.903 "name": "raid_bdev1", 00:25:04.903 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:04.903 "strip_size_kb": 0, 00:25:04.903 "state": "online", 00:25:04.903 "raid_level": "raid1", 00:25:04.903 "superblock": true, 00:25:04.903 "num_base_bdevs": 2, 00:25:04.903 "num_base_bdevs_discovered": 2, 00:25:04.903 "num_base_bdevs_operational": 2, 00:25:04.903 "base_bdevs_list": [ 00:25:04.903 { 00:25:04.903 "name": "spare", 00:25:04.903 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:04.903 "is_configured": true, 00:25:04.903 "data_offset": 2048, 00:25:04.903 "data_size": 63488 00:25:04.903 }, 00:25:04.903 { 00:25:04.903 "name": "BaseBdev2", 00:25:04.903 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:04.903 "is_configured": true, 00:25:04.903 "data_offset": 2048, 00:25:04.903 "data_size": 63488 00:25:04.903 } 00:25:04.903 ] 00:25:04.903 }' 00:25:04.903 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.162 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.421 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.421 "name": "raid_bdev1", 00:25:05.421 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:05.421 "strip_size_kb": 0, 00:25:05.422 "state": "online", 00:25:05.422 "raid_level": "raid1", 00:25:05.422 "superblock": true, 00:25:05.422 "num_base_bdevs": 2, 00:25:05.422 "num_base_bdevs_discovered": 2, 00:25:05.422 "num_base_bdevs_operational": 2, 00:25:05.422 "base_bdevs_list": [ 00:25:05.422 { 00:25:05.422 "name": "spare", 00:25:05.422 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:05.422 "is_configured": true, 00:25:05.422 "data_offset": 2048, 00:25:05.422 "data_size": 63488 00:25:05.422 }, 00:25:05.422 { 00:25:05.422 "name": "BaseBdev2", 00:25:05.422 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:05.422 "is_configured": true, 00:25:05.422 "data_offset": 2048, 00:25:05.422 "data_size": 63488 00:25:05.422 } 00:25:05.422 ] 
00:25:05.422 }' 00:25:05.422 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.422 19:09:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.991 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:05.991 [2024-06-10 19:09:20.737145] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.991 [2024-06-10 19:09:20.737174] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:06.250 00:25:06.250 Latency(us) 00:25:06.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.250 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:06.250 raid_bdev1 : 11.19 97.66 292.99 0.00 0.00 14417.03 275.25 116601.65 00:25:06.250 =================================================================================================================== 00:25:06.250 Total : 97.66 292.99 0.00 0.00 14417.03 275.25 116601.65 00:25:06.250 [2024-06-10 19:09:20.764839] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.250 [2024-06-10 19:09:20.764863] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:06.250 [2024-06-10 19:09:20.764927] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:06.250 [2024-06-10 19:09:20.764937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1921d90 name raid_bdev1, state offline 00:25:06.250 0 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # jq length 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.250 19:09:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:06.509 /dev/nbd0 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:25:06.509 19:09:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:06.509 1+0 records in 00:25:06.509 1+0 records out 00:25:06.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167033 s, 24.5 MB/s 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:06.509 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.510 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:25:06.769 /dev/nbd1 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # 
grep -q -w nbd1 /proc/partitions 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:06.769 1+0 records in 00:25:06.769 1+0 records out 00:25:06.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273497 s, 15.0 MB/s 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.769 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:07.028 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:07.288 19:09:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:07.288 19:09:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:07.547 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:25:07.548 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:07.548 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:07.808 [2024-06-10 19:09:22.498901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:07.808 [2024-06-10 19:09:22.498942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.808 [2024-06-10 19:09:22.498964] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x19bdaf0 00:25:07.808 [2024-06-10 19:09:22.498975] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.808 [2024-06-10 19:09:22.500484] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.808 [2024-06-10 19:09:22.500511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:07.808 [2024-06-10 19:09:22.500595] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:07.808 [2024-06-10 19:09:22.500619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:07.808 [2024-06-10 19:09:22.500713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.808 spare 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.808 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.068 [2024-06-10 19:09:22.601019] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ac0850 00:25:08.068 [2024-06-10 19:09:22.601032] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:08.068 [2024-06-10 19:09:22.601193] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1927ed0 00:25:08.068 [2024-06-10 19:09:22.601319] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ac0850 00:25:08.068 [2024-06-10 19:09:22.601328] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ac0850 00:25:08.068 [2024-06-10 19:09:22.601422] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.068 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.068 "name": "raid_bdev1", 00:25:08.068 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:08.068 "strip_size_kb": 0, 00:25:08.068 "state": "online", 00:25:08.068 "raid_level": "raid1", 00:25:08.068 "superblock": true, 00:25:08.068 "num_base_bdevs": 2, 00:25:08.068 "num_base_bdevs_discovered": 2, 00:25:08.068 "num_base_bdevs_operational": 2, 00:25:08.068 "base_bdevs_list": [ 00:25:08.068 { 00:25:08.068 "name": "spare", 00:25:08.068 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:08.068 "is_configured": true, 00:25:08.068 "data_offset": 2048, 00:25:08.068 "data_size": 63488 00:25:08.068 }, 00:25:08.068 { 00:25:08.068 "name": "BaseBdev2", 00:25:08.068 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:08.068 "is_configured": true, 00:25:08.068 "data_offset": 2048, 00:25:08.068 "data_size": 63488 00:25:08.068 } 00:25:08.068 ] 00:25:08.068 }' 00:25:08.068 19:09:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.068 19:09:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.636 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.895 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:08.895 "name": "raid_bdev1", 00:25:08.895 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:08.895 "strip_size_kb": 0, 00:25:08.895 "state": "online", 00:25:08.895 "raid_level": "raid1", 00:25:08.895 "superblock": true, 00:25:08.895 "num_base_bdevs": 2, 00:25:08.895 "num_base_bdevs_discovered": 2, 00:25:08.895 "num_base_bdevs_operational": 2, 00:25:08.895 "base_bdevs_list": [ 00:25:08.895 { 00:25:08.895 "name": "spare", 00:25:08.895 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:08.896 "is_configured": true, 00:25:08.896 "data_offset": 2048, 00:25:08.896 "data_size": 63488 00:25:08.896 }, 00:25:08.896 { 00:25:08.896 "name": "BaseBdev2", 00:25:08.896 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:08.896 "is_configured": true, 00:25:08.896 "data_offset": 2048, 00:25:08.896 
"data_size": 63488 00:25:08.896 } 00:25:08.896 ] 00:25:08.896 }' 00:25:08.896 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:08.896 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:08.896 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:08.896 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:09.155 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:09.155 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.155 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:25:09.155 19:09:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:09.414 [2024-06-10 19:09:24.083361] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:09.414 19:09:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.414 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.674 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.674 "name": "raid_bdev1", 00:25:09.674 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:09.674 "strip_size_kb": 0, 00:25:09.674 "state": "online", 00:25:09.674 "raid_level": "raid1", 00:25:09.674 "superblock": true, 00:25:09.674 "num_base_bdevs": 2, 00:25:09.674 "num_base_bdevs_discovered": 1, 00:25:09.674 "num_base_bdevs_operational": 1, 00:25:09.674 "base_bdevs_list": [ 00:25:09.674 { 00:25:09.674 "name": null, 00:25:09.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.674 "is_configured": false, 00:25:09.674 "data_offset": 2048, 00:25:09.674 "data_size": 63488 00:25:09.674 }, 00:25:09.674 { 00:25:09.674 "name": "BaseBdev2", 00:25:09.674 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:09.674 "is_configured": true, 00:25:09.674 "data_offset": 2048, 00:25:09.674 "data_size": 63488 00:25:09.674 } 00:25:09.674 ] 00:25:09.674 }' 00:25:09.674 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.674 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.242 19:09:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:10.501 [2024-06-10 19:09:25.114285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.501 [2024-06-10 19:09:25.114411] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:10.501 [2024-06-10 19:09:25.114426] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:10.501 [2024-06-10 19:09:25.114453] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.501 [2024-06-10 19:09:25.119537] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16251c0 00:25:10.501 [2024-06-10 19:09:25.121780] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:10.501 19:09:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.438 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.696 19:09:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:11.696 "name": "raid_bdev1", 00:25:11.696 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:11.696 "strip_size_kb": 0, 00:25:11.696 "state": "online", 00:25:11.696 "raid_level": "raid1", 00:25:11.696 "superblock": true, 00:25:11.696 "num_base_bdevs": 2, 00:25:11.696 "num_base_bdevs_discovered": 2, 00:25:11.696 "num_base_bdevs_operational": 2, 00:25:11.696 "process": { 00:25:11.696 "type": "rebuild", 00:25:11.696 "target": "spare", 00:25:11.696 "progress": { 00:25:11.696 "blocks": 24576, 00:25:11.696 "percent": 38 00:25:11.696 } 00:25:11.696 }, 00:25:11.696 "base_bdevs_list": [ 00:25:11.697 { 00:25:11.697 "name": "spare", 00:25:11.697 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:11.697 "is_configured": true, 00:25:11.697 "data_offset": 2048, 00:25:11.697 "data_size": 63488 00:25:11.697 }, 00:25:11.697 { 00:25:11.697 "name": "BaseBdev2", 00:25:11.697 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:11.697 "is_configured": true, 00:25:11.697 "data_offset": 2048, 00:25:11.697 "data_size": 63488 00:25:11.697 } 00:25:11.697 ] 00:25:11.697 }' 00:25:11.697 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:11.697 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:11.697 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:11.955 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:11.955 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:11.955 [2024-06-10 19:09:26.660889] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.214 [2024-06-10 19:09:26.733473] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:12.214 [2024-06-10 19:09:26.733515] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.214 [2024-06-10 19:09:26.733529] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.214 [2024-06-10 19:09:26.733537] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.214 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.473 19:09:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:12.473 "name": "raid_bdev1", 00:25:12.473 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:12.473 "strip_size_kb": 0, 00:25:12.473 "state": "online", 00:25:12.473 "raid_level": "raid1", 00:25:12.473 "superblock": true, 00:25:12.473 "num_base_bdevs": 2, 00:25:12.473 "num_base_bdevs_discovered": 1, 00:25:12.473 "num_base_bdevs_operational": 1, 00:25:12.473 "base_bdevs_list": [ 00:25:12.473 { 00:25:12.473 "name": null, 00:25:12.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.473 "is_configured": false, 00:25:12.473 "data_offset": 2048, 00:25:12.473 "data_size": 63488 00:25:12.473 }, 00:25:12.473 { 00:25:12.473 "name": "BaseBdev2", 00:25:12.473 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:12.473 "is_configured": true, 00:25:12.473 "data_offset": 2048, 00:25:12.473 "data_size": 63488 00:25:12.473 } 00:25:12.473 ] 00:25:12.473 }' 00:25:12.473 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:12.473 19:09:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:13.042 19:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:13.042 [2024-06-10 19:09:27.764740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:13.042 [2024-06-10 19:09:27.764784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.042 [2024-06-10 19:09:27.764804] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1acb270 00:25:13.042 [2024-06-10 19:09:27.764815] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.042 [2024-06-10 19:09:27.765142] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.042 [2024-06-10 
19:09:27.765158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:13.042 [2024-06-10 19:09:27.765229] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:13.042 [2024-06-10 19:09:27.765240] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:13.042 [2024-06-10 19:09:27.765249] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:13.042 [2024-06-10 19:09:27.765267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:13.042 [2024-06-10 19:09:27.770308] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1acb500 00:25:13.042 spare 00:25:13.042 [2024-06-10 19:09:27.771666] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.042 19:09:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.421 19:09:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.421 19:09:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.421 "name": "raid_bdev1", 00:25:14.421 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:14.421 "strip_size_kb": 0, 00:25:14.421 "state": "online", 00:25:14.421 "raid_level": "raid1", 00:25:14.421 "superblock": true, 00:25:14.421 "num_base_bdevs": 2, 00:25:14.421 "num_base_bdevs_discovered": 2, 00:25:14.421 "num_base_bdevs_operational": 2, 00:25:14.421 "process": { 00:25:14.421 "type": "rebuild", 00:25:14.421 "target": "spare", 00:25:14.421 "progress": { 00:25:14.421 "blocks": 24576, 00:25:14.421 "percent": 38 00:25:14.421 } 00:25:14.421 }, 00:25:14.421 "base_bdevs_list": [ 00:25:14.421 { 00:25:14.421 "name": "spare", 00:25:14.421 "uuid": "3afa9bae-ade3-5187-baee-a668655da164", 00:25:14.421 "is_configured": true, 00:25:14.421 "data_offset": 2048, 00:25:14.421 "data_size": 63488 00:25:14.421 }, 00:25:14.421 { 00:25:14.421 "name": "BaseBdev2", 00:25:14.421 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:14.421 "is_configured": true, 00:25:14.421 "data_offset": 2048, 00:25:14.421 "data_size": 63488 00:25:14.421 } 00:25:14.421 ] 00:25:14.421 }' 00:25:14.421 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:14.421 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:14.421 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:14.421 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:14.421 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:14.681 [2024-06-10 19:09:29.319273] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:14.681 [2024-06-10 19:09:29.383415] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:14.681 [2024-06-10 19:09:29.383456] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.681 [2024-06-10 19:09:29.383470] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:14.681 [2024-06-10 19:09:29.383477] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.681 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.941 19:09:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.941 "name": "raid_bdev1", 00:25:14.941 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:14.941 "strip_size_kb": 0, 00:25:14.941 "state": "online", 00:25:14.941 "raid_level": "raid1", 00:25:14.941 "superblock": true, 00:25:14.941 "num_base_bdevs": 2, 00:25:14.941 "num_base_bdevs_discovered": 1, 00:25:14.941 "num_base_bdevs_operational": 1, 00:25:14.941 "base_bdevs_list": [ 00:25:14.941 { 00:25:14.941 "name": null, 00:25:14.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.941 "is_configured": false, 00:25:14.941 "data_offset": 2048, 00:25:14.941 "data_size": 63488 00:25:14.941 }, 00:25:14.941 { 00:25:14.941 "name": "BaseBdev2", 00:25:14.941 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:14.941 "is_configured": true, 00:25:14.941 "data_offset": 2048, 00:25:14.941 "data_size": 63488 00:25:14.941 } 00:25:14.941 ] 00:25:14.941 }' 00:25:14.941 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.941 19:09:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.510 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.769 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.769 "name": "raid_bdev1", 00:25:15.769 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:15.769 "strip_size_kb": 0, 00:25:15.769 "state": "online", 00:25:15.769 "raid_level": "raid1", 00:25:15.769 "superblock": true, 00:25:15.769 "num_base_bdevs": 2, 00:25:15.769 "num_base_bdevs_discovered": 1, 00:25:15.769 "num_base_bdevs_operational": 1, 00:25:15.769 "base_bdevs_list": [ 00:25:15.769 { 00:25:15.769 "name": null, 00:25:15.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.769 "is_configured": false, 00:25:15.769 "data_offset": 2048, 00:25:15.769 "data_size": 63488 00:25:15.769 }, 00:25:15.769 { 00:25:15.769 "name": "BaseBdev2", 00:25:15.769 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:15.769 "is_configured": true, 00:25:15.769 "data_offset": 2048, 00:25:15.769 "data_size": 63488 00:25:15.769 } 00:25:15.769 ] 00:25:15.769 }' 00:25:15.769 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:15.769 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:15.769 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:16.028 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:16.028 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:16.028 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:16.288 [2024-06-10 19:09:30.968206] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:16.288 [2024-06-10 19:09:30.968249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.288 [2024-06-10 19:09:30.968269] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19200d0 00:25:16.288 [2024-06-10 19:09:30.968281] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.288 [2024-06-10 19:09:30.968597] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.288 [2024-06-10 19:09:30.968614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:16.288 [2024-06-10 19:09:30.968672] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:16.288 [2024-06-10 19:09:30.968682] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:16.288 [2024-06-10 19:09:30.968692] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:16.288 BaseBdev1 00:25:16.288 19:09:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:17.666 19:09:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.666 19:09:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.666 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.666 "name": "raid_bdev1", 00:25:17.666 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:17.666 "strip_size_kb": 0, 00:25:17.666 "state": "online", 00:25:17.666 "raid_level": "raid1", 00:25:17.666 "superblock": true, 00:25:17.666 "num_base_bdevs": 2, 00:25:17.666 "num_base_bdevs_discovered": 1, 00:25:17.666 "num_base_bdevs_operational": 1, 00:25:17.666 "base_bdevs_list": [ 00:25:17.666 { 00:25:17.666 "name": null, 00:25:17.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.666 "is_configured": false, 00:25:17.666 "data_offset": 2048, 00:25:17.666 "data_size": 63488 00:25:17.666 }, 00:25:17.666 { 00:25:17.666 "name": "BaseBdev2", 00:25:17.666 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:17.666 "is_configured": true, 00:25:17.666 "data_offset": 2048, 00:25:17.666 "data_size": 63488 00:25:17.666 } 00:25:17.667 ] 00:25:17.667 }' 00:25:17.667 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.667 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.233 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.233 "name": "raid_bdev1", 00:25:18.233 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:18.233 "strip_size_kb": 0, 00:25:18.233 "state": "online", 00:25:18.233 "raid_level": "raid1", 00:25:18.233 "superblock": true, 00:25:18.233 "num_base_bdevs": 2, 00:25:18.233 "num_base_bdevs_discovered": 1, 00:25:18.233 "num_base_bdevs_operational": 1, 00:25:18.233 "base_bdevs_list": [ 00:25:18.233 { 00:25:18.233 "name": null, 00:25:18.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.233 "is_configured": false, 00:25:18.233 "data_offset": 2048, 00:25:18.233 "data_size": 63488 00:25:18.233 }, 00:25:18.233 { 00:25:18.233 "name": "BaseBdev2", 00:25:18.233 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:18.233 "is_configured": true, 00:25:18.233 "data_offset": 2048, 00:25:18.233 "data_size": 63488 00:25:18.233 } 00:25:18.233 ] 00:25:18.233 }' 00:25:18.491 19:09:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none 
== \n\o\n\e ]] 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # local es=0 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:18.491 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:18.749 [2024-06-10 19:09:33.290636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.749 [2024-06-10 19:09:33.290745] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:18.749 [2024-06-10 19:09:33.290760] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:18.749 request: 00:25:18.749 { 00:25:18.749 "raid_bdev": "raid_bdev1", 00:25:18.749 "base_bdev": "BaseBdev1", 00:25:18.749 "method": "bdev_raid_add_base_bdev", 00:25:18.749 "req_id": 1 00:25:18.749 } 00:25:18.749 Got JSON-RPC error response 00:25:18.749 response: 00:25:18.749 { 00:25:18.749 "code": -22, 00:25:18.749 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:18.749 } 00:25:18.749 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # es=1 00:25:18.749 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:18.749 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:18.749 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:18.749 19:09:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.686 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.946 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.946 "name": "raid_bdev1", 00:25:19.946 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:19.946 "strip_size_kb": 0, 00:25:19.946 "state": "online", 00:25:19.946 "raid_level": "raid1", 00:25:19.946 "superblock": true, 00:25:19.946 "num_base_bdevs": 2, 00:25:19.946 "num_base_bdevs_discovered": 1, 00:25:19.946 "num_base_bdevs_operational": 1, 00:25:19.946 "base_bdevs_list": [ 00:25:19.946 { 00:25:19.946 "name": null, 00:25:19.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.946 "is_configured": false, 00:25:19.946 "data_offset": 2048, 00:25:19.946 "data_size": 63488 00:25:19.946 }, 00:25:19.946 { 00:25:19.946 "name": "BaseBdev2", 00:25:19.946 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:19.946 
"is_configured": true, 00:25:19.946 "data_offset": 2048, 00:25:19.946 "data_size": 63488 00:25:19.946 } 00:25:19.946 ] 00:25:19.946 }' 00:25:19.946 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.946 19:09:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.514 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:20.773 "name": "raid_bdev1", 00:25:20.773 "uuid": "90eca4f5-7e39-4dda-8e8d-f09cf38bdac3", 00:25:20.773 "strip_size_kb": 0, 00:25:20.773 "state": "online", 00:25:20.773 "raid_level": "raid1", 00:25:20.773 "superblock": true, 00:25:20.773 "num_base_bdevs": 2, 00:25:20.773 "num_base_bdevs_discovered": 1, 00:25:20.773 "num_base_bdevs_operational": 1, 00:25:20.773 "base_bdevs_list": [ 00:25:20.773 { 00:25:20.773 "name": null, 00:25:20.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.773 "is_configured": false, 00:25:20.773 "data_offset": 2048, 00:25:20.773 "data_size": 63488 00:25:20.773 }, 00:25:20.773 { 00:25:20.773 "name": 
"BaseBdev2", 00:25:20.773 "uuid": "eae7cf9c-7f8c-5125-9348-6f9987b8ffdd", 00:25:20.773 "is_configured": true, 00:25:20.773 "data_offset": 2048, 00:25:20.773 "data_size": 63488 00:25:20.773 } 00:25:20.773 ] 00:25:20.773 }' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 1758548 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@949 -- # '[' -z 1758548 ']' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # kill -0 1758548 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # uname 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1758548 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1758548' 00:25:20.773 killing process with pid 1758548 00:25:20.773 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # kill 1758548 00:25:20.773 Received shutdown signal, test time was about 25.893972 seconds 00:25:20.773 00:25:20.773 Latency(us) 
00:25:20.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.773 =================================================================================================================== 00:25:20.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.773 [2024-06-10 19:09:35.499425] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:20.774 [2024-06-10 19:09:35.499505] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.774 [2024-06-10 19:09:35.499544] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.774 [2024-06-10 19:09:35.499554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ac0850 name raid_bdev1, state offline 00:25:20.774 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # wait 1758548 00:25:20.774 [2024-06-10 19:09:35.518934] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:21.033 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:25:21.033 00:25:21.033 real 0m30.229s 00:25:21.033 user 0m46.839s 00:25:21.033 sys 0m4.381s 00:25:21.033 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:21.033 19:09:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:21.033 ************************************ 00:25:21.033 END TEST raid_rebuild_test_sb_io 00:25:21.033 ************************************ 00:25:21.033 19:09:35 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:25:21.033 19:09:35 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:25:21.033 19:09:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:25:21.033 19:09:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:21.033 19:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set 
+x 00:25:21.292 ************************************ 00:25:21.292 START TEST raid_rebuild_test 00:25:21.292 ************************************ 00:25:21.292 19:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 false false true 00:25:21.292 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:21.293 19:09:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=1764251 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 1764251 /var/tmp/spdk-raid.sock 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@830 -- # '[' -z 1764251 ']' 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:21.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:21.293 19:09:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.293 [2024-06-10 19:09:35.864778] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:25:21.293 [2024-06-10 19:09:35.864833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764251 ] 00:25:21.293 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:21.293 Zero copy mechanism will not be used. 
00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.0 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.1 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.2 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.3 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.4 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.5 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.6 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:01.7 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.0 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.1 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.2 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.3 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.4 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.5 cannot be used 00:25:21.293 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.6 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b6:02.7 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.0 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.1 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.2 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.3 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.4 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.5 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.6 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:01.7 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.0 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.1 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.2 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.3 cannot be used 00:25:21.293 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.4 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.5 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.6 cannot be used 00:25:21.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.293 EAL: Requested device 0000:b8:02.7 cannot be used 00:25:21.293 [2024-06-10 19:09:35.994696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.552 [2024-06-10 19:09:36.081902] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.552 [2024-06-10 19:09:36.144713] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:21.552 [2024-06-10 19:09:36.144750] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.119 19:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:22.119 19:09:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@863 -- # return 0 00:25:22.119 19:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:22.119 19:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:22.389 BaseBdev1_malloc 00:25:22.389 19:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:22.663 [2024-06-10 19:09:37.189357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:22.663 [2024-06-10 19:09:37.189401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:25:22.663 [2024-06-10 19:09:37.189421] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ec200 00:25:22.663 [2024-06-10 19:09:37.189432] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.663 [2024-06-10 19:09:37.190949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.663 [2024-06-10 19:09:37.190973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:22.663 BaseBdev1 00:25:22.663 19:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:22.663 19:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:22.663 BaseBdev2_malloc 00:25:22.921 19:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:22.921 [2024-06-10 19:09:37.634916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:22.921 [2024-06-10 19:09:37.634956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.921 [2024-06-10 19:09:37.634974] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2383d90 00:25:22.921 [2024-06-10 19:09:37.634985] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.921 [2024-06-10 19:09:37.636400] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.921 [2024-06-10 19:09:37.636425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:22.921 BaseBdev2 00:25:22.922 19:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:22.922 19:09:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:23.180 BaseBdev3_malloc 00:25:23.180 19:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:23.439 [2024-06-10 19:09:38.092361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:23.439 [2024-06-10 19:09:38.092400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.439 [2024-06-10 19:09:38.092418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2386540 00:25:23.439 [2024-06-10 19:09:38.092429] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.439 [2024-06-10 19:09:38.093781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.439 [2024-06-10 19:09:38.093806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:23.439 BaseBdev3 00:25:23.439 19:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:23.439 19:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:23.698 BaseBdev4_malloc 00:25:23.698 19:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:23.958 [2024-06-10 19:09:38.549916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:23.958 [2024-06-10 19:09:38.549956] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:25:23.958 [2024-06-10 19:09:38.549973] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2386b40 00:25:23.958 [2024-06-10 19:09:38.549984] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.958 [2024-06-10 19:09:38.551344] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.958 [2024-06-10 19:09:38.551369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:23.958 BaseBdev4 00:25:23.958 19:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:24.217 spare_malloc 00:25:24.217 19:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:24.480 spare_delay 00:25:24.480 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:24.480 [2024-06-10 19:09:39.220045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.480 [2024-06-10 19:09:39.220085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.480 [2024-06-10 19:09:39.220105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21e6f50 00:25:24.480 [2024-06-10 19:09:39.220117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.480 [2024-06-10 19:09:39.221557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.480 [2024-06-10 19:09:39.221591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:25:24.480 spare 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:24.738 [2024-06-10 19:09:39.440648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:24.738 [2024-06-10 19:09:39.441802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:24.738 [2024-06-10 19:09:39.441853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:24.738 [2024-06-10 19:09:39.441893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:24.738 [2024-06-10 19:09:39.441964] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21e7a70 00:25:24.738 [2024-06-10 19:09:39.441974] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:24.738 [2024-06-10 19:09:39.442162] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21ebed0 00:25:24.738 [2024-06-10 19:09:39.442302] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21e7a70 00:25:24.738 [2024-06-10 19:09:39.442312] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21e7a70 00:25:24.738 [2024-06-10 19:09:39.442413] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.738 
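The verify_raid_bdev_state helper running here pulls `bdev_raid_get_bdevs all` over the RPC socket and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking the state fields. A minimal Python sketch of that same selection and check, using a sample record modeled on the JSON this log dumps (only the fields the check actually reads are included; this is an illustration, not the test script itself):

```python
import json

# Sample output modeled on this log's `bdev_raid_get_bdevs all` dump.
raid_bdevs_json = '''
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
'''

def select_bdev(dump, name):
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    return next(b for b in json.loads(dump) if b["name"] == name)

info = select_bdev(raid_bdevs_json, "raid_bdev1")
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 4
```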
19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.738 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.997 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.997 "name": "raid_bdev1", 00:25:24.997 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:24.997 "strip_size_kb": 0, 00:25:24.997 "state": "online", 00:25:24.997 "raid_level": "raid1", 00:25:24.997 "superblock": false, 00:25:24.997 "num_base_bdevs": 4, 00:25:24.997 "num_base_bdevs_discovered": 4, 00:25:24.997 "num_base_bdevs_operational": 4, 00:25:24.997 "base_bdevs_list": [ 00:25:24.997 { 00:25:24.997 "name": "BaseBdev1", 00:25:24.997 "uuid": "fcbbc989-55fa-5fa2-9d11-6ea943747b30", 00:25:24.997 "is_configured": true, 00:25:24.997 "data_offset": 0, 00:25:24.997 "data_size": 65536 00:25:24.997 }, 00:25:24.997 { 00:25:24.997 "name": "BaseBdev2", 00:25:24.997 "uuid": "9f68955f-4e20-5740-b01a-fd65e3773281", 00:25:24.997 "is_configured": true, 00:25:24.997 "data_offset": 0, 00:25:24.997 "data_size": 65536 00:25:24.997 }, 00:25:24.997 { 00:25:24.997 "name": "BaseBdev3", 00:25:24.997 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:24.997 
"is_configured": true, 00:25:24.997 "data_offset": 0, 00:25:24.997 "data_size": 65536 00:25:24.997 }, 00:25:24.997 { 00:25:24.997 "name": "BaseBdev4", 00:25:24.997 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:24.997 "is_configured": true, 00:25:24.997 "data_offset": 0, 00:25:24.997 "data_size": 65536 00:25:24.997 } 00:25:24.997 ] 00:25:24.997 }' 00:25:24.997 19:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.997 19:09:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.565 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:25.565 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:25.824 [2024-06-10 19:09:40.467603] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.824 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:25:25.824 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.824 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.083 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:26.342 [2024-06-10 19:09:40.928602] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21ebed0 00:25:26.342 /dev/nbd0 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:25:26.342 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:25:26.342 
19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:26.343 1+0 records in 00:25:26.343 1+0 records out 00:25:26.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226154 s, 18.1 MB/s 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:26.343 19:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:25:34.465 65536+0 records in 00:25:34.465 65536+0 records out 00:25:34.465 33554432 bytes (34 MB, 32 MiB) copied, 7.08376 s, 4.7 MB/s 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:34.465 19:09:48 
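The dd transfer above writes 65536 blocks of 512 bytes from /dev/urandom through /dev/nbd0 into the raid1 bdev. The reported byte count and rate follow directly from the block math; a small sketch, with the elapsed time taken from this log:

```python
blocks, block_size = 65536, 512          # dd bs=512 count=65536
total_bytes = blocks * block_size
assert total_bytes == 33554432           # "33554432 bytes (34 MB, 32 MiB)"

elapsed_s = 7.08376                      # elapsed time reported by dd above
rate_mb_s = total_bytes / elapsed_s / 1e6
assert round(rate_mb_s, 1) == 4.7        # matches dd's "4.7 MB/s"
```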
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:34.465 [2024-06-10 19:09:48.315225] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:34.465 [2024-06-10 19:09:48.539850] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.465 "name": "raid_bdev1", 00:25:34.465 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:34.465 "strip_size_kb": 0, 00:25:34.465 "state": "online", 00:25:34.465 "raid_level": "raid1", 00:25:34.465 "superblock": false, 00:25:34.465 "num_base_bdevs": 4, 00:25:34.465 "num_base_bdevs_discovered": 3, 00:25:34.465 "num_base_bdevs_operational": 3, 00:25:34.465 "base_bdevs_list": [ 00:25:34.465 { 00:25:34.465 "name": null, 00:25:34.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.465 "is_configured": false, 00:25:34.465 "data_offset": 0, 00:25:34.465 "data_size": 65536 00:25:34.465 }, 00:25:34.465 { 00:25:34.465 "name": "BaseBdev2", 00:25:34.465 "uuid": "9f68955f-4e20-5740-b01a-fd65e3773281", 
00:25:34.465 "is_configured": true, 00:25:34.465 "data_offset": 0, 00:25:34.465 "data_size": 65536 00:25:34.465 }, 00:25:34.465 { 00:25:34.465 "name": "BaseBdev3", 00:25:34.465 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:34.465 "is_configured": true, 00:25:34.465 "data_offset": 0, 00:25:34.465 "data_size": 65536 00:25:34.465 }, 00:25:34.465 { 00:25:34.465 "name": "BaseBdev4", 00:25:34.465 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:34.465 "is_configured": true, 00:25:34.465 "data_offset": 0, 00:25:34.465 "data_size": 65536 00:25:34.465 } 00:25:34.465 ] 00:25:34.465 }' 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.465 19:09:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.725 19:09:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:34.984 [2024-06-10 19:09:49.566560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:34.984 [2024-06-10 19:09:49.570436] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21eb2b0 00:25:34.984 [2024-06-10 19:09:49.572509] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:34.984 19:09:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:35.921 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.921 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:35.921 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:35.922 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:35.922 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:25:35.922 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.922 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.181 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.181 "name": "raid_bdev1", 00:25:36.181 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:36.181 "strip_size_kb": 0, 00:25:36.181 "state": "online", 00:25:36.181 "raid_level": "raid1", 00:25:36.181 "superblock": false, 00:25:36.181 "num_base_bdevs": 4, 00:25:36.181 "num_base_bdevs_discovered": 4, 00:25:36.181 "num_base_bdevs_operational": 4, 00:25:36.181 "process": { 00:25:36.181 "type": "rebuild", 00:25:36.181 "target": "spare", 00:25:36.181 "progress": { 00:25:36.181 "blocks": 24576, 00:25:36.181 "percent": 37 00:25:36.181 } 00:25:36.181 }, 00:25:36.181 "base_bdevs_list": [ 00:25:36.181 { 00:25:36.181 "name": "spare", 00:25:36.181 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:36.181 "is_configured": true, 00:25:36.181 "data_offset": 0, 00:25:36.181 "data_size": 65536 00:25:36.181 }, 00:25:36.181 { 00:25:36.181 "name": "BaseBdev2", 00:25:36.181 "uuid": "9f68955f-4e20-5740-b01a-fd65e3773281", 00:25:36.181 "is_configured": true, 00:25:36.181 "data_offset": 0, 00:25:36.181 "data_size": 65536 00:25:36.181 }, 00:25:36.181 { 00:25:36.181 "name": "BaseBdev3", 00:25:36.181 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:36.181 "is_configured": true, 00:25:36.181 "data_offset": 0, 00:25:36.181 "data_size": 65536 00:25:36.181 }, 00:25:36.181 { 00:25:36.181 "name": "BaseBdev4", 00:25:36.181 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:36.181 "is_configured": true, 00:25:36.181 "data_offset": 0, 00:25:36.181 "data_size": 65536 00:25:36.181 } 00:25:36.181 ] 00:25:36.181 }' 00:25:36.181 19:09:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:36.181 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.181 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.181 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.181 19:09:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:36.440 [2024-06-10 19:09:51.113941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:36.440 [2024-06-10 19:09:51.184285] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:36.440 [2024-06-10 19:09:51.184332] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.440 [2024-06-10 19:09:51.184348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:36.440 [2024-06-10 19:09:51.184356] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.700 "name": "raid_bdev1", 00:25:36.700 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:36.700 "strip_size_kb": 0, 00:25:36.700 "state": "online", 00:25:36.700 "raid_level": "raid1", 00:25:36.700 "superblock": false, 00:25:36.700 "num_base_bdevs": 4, 00:25:36.700 "num_base_bdevs_discovered": 3, 00:25:36.700 "num_base_bdevs_operational": 3, 00:25:36.700 "base_bdevs_list": [ 00:25:36.700 { 00:25:36.700 "name": null, 00:25:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.700 "is_configured": false, 00:25:36.700 "data_offset": 0, 00:25:36.700 "data_size": 65536 00:25:36.700 }, 00:25:36.700 { 00:25:36.700 "name": "BaseBdev2", 00:25:36.700 "uuid": "9f68955f-4e20-5740-b01a-fd65e3773281", 00:25:36.700 "is_configured": true, 00:25:36.700 "data_offset": 0, 00:25:36.700 "data_size": 65536 00:25:36.700 }, 00:25:36.700 { 00:25:36.700 "name": "BaseBdev3", 00:25:36.700 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:36.700 "is_configured": true, 00:25:36.700 "data_offset": 0, 00:25:36.700 "data_size": 65536 00:25:36.700 }, 00:25:36.700 { 00:25:36.700 "name": "BaseBdev4", 00:25:36.700 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:36.700 "is_configured": true, 00:25:36.700 "data_offset": 0, 00:25:36.700 
"data_size": 65536 00:25:36.700 } 00:25:36.700 ] 00:25:36.700 }' 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.700 19:09:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.268 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.528 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:37.528 "name": "raid_bdev1", 00:25:37.528 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:37.528 "strip_size_kb": 0, 00:25:37.528 "state": "online", 00:25:37.528 "raid_level": "raid1", 00:25:37.528 "superblock": false, 00:25:37.528 "num_base_bdevs": 4, 00:25:37.528 "num_base_bdevs_discovered": 3, 00:25:37.528 "num_base_bdevs_operational": 3, 00:25:37.528 "base_bdevs_list": [ 00:25:37.528 { 00:25:37.528 "name": null, 00:25:37.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.528 "is_configured": false, 00:25:37.528 "data_offset": 0, 00:25:37.528 "data_size": 65536 00:25:37.528 }, 00:25:37.528 { 00:25:37.528 "name": "BaseBdev2", 00:25:37.528 "uuid": "9f68955f-4e20-5740-b01a-fd65e3773281", 00:25:37.528 "is_configured": true, 00:25:37.528 "data_offset": 0, 
00:25:37.528 "data_size": 65536 00:25:37.528 }, 00:25:37.528 { 00:25:37.528 "name": "BaseBdev3", 00:25:37.528 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:37.528 "is_configured": true, 00:25:37.528 "data_offset": 0, 00:25:37.528 "data_size": 65536 00:25:37.528 }, 00:25:37.528 { 00:25:37.528 "name": "BaseBdev4", 00:25:37.528 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:37.528 "is_configured": true, 00:25:37.528 "data_offset": 0, 00:25:37.528 "data_size": 65536 00:25:37.528 } 00:25:37.528 ] 00:25:37.528 }' 00:25:37.528 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:37.528 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:37.787 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:37.787 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:37.787 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:37.787 [2024-06-10 19:09:52.535595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:37.787 [2024-06-10 19:09:52.539506] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21ebb20 00:25:37.787 [2024-06-10 19:09:52.540905] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:38.046 19:09:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.984 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.244 "name": "raid_bdev1", 00:25:39.244 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:39.244 "strip_size_kb": 0, 00:25:39.244 "state": "online", 00:25:39.244 "raid_level": "raid1", 00:25:39.244 "superblock": false, 00:25:39.244 "num_base_bdevs": 4, 00:25:39.244 "num_base_bdevs_discovered": 4, 00:25:39.244 "num_base_bdevs_operational": 4, 00:25:39.244 "process": { 00:25:39.244 "type": "rebuild", 00:25:39.244 "target": "spare", 00:25:39.244 "progress": { 00:25:39.244 "blocks": 24576, 00:25:39.244 "percent": 37 00:25:39.244 } 00:25:39.244 }, 00:25:39.244 "base_bdevs_list": [ 00:25:39.244 { 00:25:39.244 "name": "spare", 00:25:39.244 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:39.244 "is_configured": true, 00:25:39.244 "data_offset": 0, 00:25:39.244 "data_size": 65536 00:25:39.244 }, 00:25:39.244 { 00:25:39.244 "name": "BaseBdev2", 00:25:39.244 "uuid": "9f68955f-4e20-5740-b01a-fd65e3773281", 00:25:39.244 "is_configured": true, 00:25:39.244 "data_offset": 0, 00:25:39.244 "data_size": 65536 00:25:39.244 }, 00:25:39.244 { 00:25:39.244 "name": "BaseBdev3", 00:25:39.244 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:39.244 "is_configured": true, 00:25:39.244 "data_offset": 0, 00:25:39.244 "data_size": 65536 00:25:39.244 }, 00:25:39.244 { 00:25:39.244 "name": "BaseBdev4", 00:25:39.244 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 
00:25:39.244 "is_configured": true, 00:25:39.244 "data_offset": 0, 00:25:39.244 "data_size": 65536 00:25:39.244 } 00:25:39.244 ] 00:25:39.244 }' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:25:39.244 19:09:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:39.503 [2024-06-10 19:09:54.077841] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:39.503 [2024-06-10 19:09:54.152546] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x21ebb20 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.503 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.762 "name": "raid_bdev1", 00:25:39.762 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:39.762 "strip_size_kb": 0, 00:25:39.762 "state": "online", 00:25:39.762 "raid_level": "raid1", 00:25:39.762 "superblock": false, 00:25:39.762 "num_base_bdevs": 4, 00:25:39.762 "num_base_bdevs_discovered": 3, 00:25:39.762 "num_base_bdevs_operational": 3, 00:25:39.762 "process": { 00:25:39.762 "type": "rebuild", 00:25:39.762 "target": "spare", 00:25:39.762 "progress": { 00:25:39.762 "blocks": 36864, 00:25:39.762 "percent": 56 00:25:39.762 } 00:25:39.762 }, 00:25:39.762 "base_bdevs_list": [ 00:25:39.762 { 00:25:39.762 "name": "spare", 00:25:39.762 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:39.762 "is_configured": true, 00:25:39.762 "data_offset": 0, 00:25:39.762 "data_size": 65536 00:25:39.762 }, 00:25:39.762 { 00:25:39.762 "name": null, 00:25:39.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.762 "is_configured": false, 00:25:39.762 "data_offset": 0, 00:25:39.762 "data_size": 65536 00:25:39.762 }, 00:25:39.762 { 00:25:39.762 "name": "BaseBdev3", 00:25:39.762 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:39.762 "is_configured": true, 00:25:39.762 "data_offset": 0, 00:25:39.762 "data_size": 65536 00:25:39.762 }, 00:25:39.762 { 00:25:39.762 "name": "BaseBdev4", 00:25:39.762 "uuid": 
"5361f909-23bd-575d-a921-31d35fb56b73", 00:25:39.762 "is_configured": true, 00:25:39.762 "data_offset": 0, 00:25:39.762 "data_size": 65536 00:25:39.762 } 00:25:39.762 ] 00:25:39.762 }' 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=819 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.762 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:39.763 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:39.763 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.763 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.763 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.021 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:40.021 "name": "raid_bdev1", 00:25:40.021 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:40.021 "strip_size_kb": 0, 00:25:40.021 "state": "online", 00:25:40.021 "raid_level": "raid1", 
00:25:40.021 "superblock": false, 00:25:40.021 "num_base_bdevs": 4, 00:25:40.021 "num_base_bdevs_discovered": 3, 00:25:40.021 "num_base_bdevs_operational": 3, 00:25:40.021 "process": { 00:25:40.021 "type": "rebuild", 00:25:40.021 "target": "spare", 00:25:40.021 "progress": { 00:25:40.021 "blocks": 43008, 00:25:40.021 "percent": 65 00:25:40.021 } 00:25:40.021 }, 00:25:40.021 "base_bdevs_list": [ 00:25:40.021 { 00:25:40.021 "name": "spare", 00:25:40.021 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:40.021 "is_configured": true, 00:25:40.021 "data_offset": 0, 00:25:40.021 "data_size": 65536 00:25:40.021 }, 00:25:40.021 { 00:25:40.021 "name": null, 00:25:40.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.021 "is_configured": false, 00:25:40.021 "data_offset": 0, 00:25:40.021 "data_size": 65536 00:25:40.021 }, 00:25:40.021 { 00:25:40.021 "name": "BaseBdev3", 00:25:40.021 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:40.021 "is_configured": true, 00:25:40.021 "data_offset": 0, 00:25:40.021 "data_size": 65536 00:25:40.021 }, 00:25:40.021 { 00:25:40.021 "name": "BaseBdev4", 00:25:40.021 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:40.021 "is_configured": true, 00:25:40.021 "data_offset": 0, 00:25:40.021 "data_size": 65536 00:25:40.021 } 00:25:40.021 ] 00:25:40.021 }' 00:25:40.021 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:40.021 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.021 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:40.279 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.279 19:09:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:41.217 [2024-06-10 19:09:55.764191] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:41.217 
[2024-06-10 19:09:55.764243] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:41.217 [2024-06-10 19:09:55.764277] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.217 19:09:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.478 "name": "raid_bdev1", 00:25:41.478 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:41.478 "strip_size_kb": 0, 00:25:41.478 "state": "online", 00:25:41.478 "raid_level": "raid1", 00:25:41.478 "superblock": false, 00:25:41.478 "num_base_bdevs": 4, 00:25:41.478 "num_base_bdevs_discovered": 3, 00:25:41.478 "num_base_bdevs_operational": 3, 00:25:41.478 "base_bdevs_list": [ 00:25:41.478 { 00:25:41.478 "name": "spare", 00:25:41.478 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:41.478 "is_configured": true, 00:25:41.478 "data_offset": 0, 00:25:41.478 "data_size": 65536 00:25:41.478 }, 00:25:41.478 { 00:25:41.478 "name": null, 00:25:41.478 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:41.478 "is_configured": false, 00:25:41.478 "data_offset": 0, 00:25:41.478 "data_size": 65536 00:25:41.478 }, 00:25:41.478 { 00:25:41.478 "name": "BaseBdev3", 00:25:41.478 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:41.478 "is_configured": true, 00:25:41.478 "data_offset": 0, 00:25:41.478 "data_size": 65536 00:25:41.478 }, 00:25:41.478 { 00:25:41.478 "name": "BaseBdev4", 00:25:41.478 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:41.478 "is_configured": true, 00:25:41.478 "data_offset": 0, 00:25:41.478 "data_size": 65536 00:25:41.478 } 00:25:41.478 ] 00:25:41.478 }' 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.478 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.738 "name": "raid_bdev1", 00:25:41.738 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:41.738 "strip_size_kb": 0, 00:25:41.738 "state": "online", 00:25:41.738 "raid_level": "raid1", 00:25:41.738 "superblock": false, 00:25:41.738 "num_base_bdevs": 4, 00:25:41.738 "num_base_bdevs_discovered": 3, 00:25:41.738 "num_base_bdevs_operational": 3, 00:25:41.738 "base_bdevs_list": [ 00:25:41.738 { 00:25:41.738 "name": "spare", 00:25:41.738 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:41.738 "is_configured": true, 00:25:41.738 "data_offset": 0, 00:25:41.738 "data_size": 65536 00:25:41.738 }, 00:25:41.738 { 00:25:41.738 "name": null, 00:25:41.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.738 "is_configured": false, 00:25:41.738 "data_offset": 0, 00:25:41.738 "data_size": 65536 00:25:41.738 }, 00:25:41.738 { 00:25:41.738 "name": "BaseBdev3", 00:25:41.738 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:41.738 "is_configured": true, 00:25:41.738 "data_offset": 0, 00:25:41.738 "data_size": 65536 00:25:41.738 }, 00:25:41.738 { 00:25:41.738 "name": "BaseBdev4", 00:25:41.738 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:41.738 "is_configured": true, 00:25:41.738 "data_offset": 0, 00:25:41.738 "data_size": 65536 00:25:41.738 } 00:25:41.738 ] 00:25:41.738 }' 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
3 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.738 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.997 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.997 "name": "raid_bdev1", 00:25:41.997 "uuid": "b175b0a5-7a80-48ff-9067-f4fe305f5348", 00:25:41.997 "strip_size_kb": 0, 00:25:41.997 "state": "online", 00:25:41.997 "raid_level": "raid1", 00:25:41.997 "superblock": false, 00:25:41.997 "num_base_bdevs": 4, 00:25:41.997 "num_base_bdevs_discovered": 3, 00:25:41.997 "num_base_bdevs_operational": 3, 00:25:41.997 "base_bdevs_list": [ 00:25:41.997 { 00:25:41.997 "name": "spare", 00:25:41.997 "uuid": "37d017a2-9bf8-52e8-a713-5dd07eaf0236", 00:25:41.997 "is_configured": true, 00:25:41.997 "data_offset": 0, 00:25:41.997 "data_size": 65536 00:25:41.997 }, 00:25:41.997 { 00:25:41.997 
"name": null, 00:25:41.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.997 "is_configured": false, 00:25:41.997 "data_offset": 0, 00:25:41.997 "data_size": 65536 00:25:41.997 }, 00:25:41.997 { 00:25:41.997 "name": "BaseBdev3", 00:25:41.997 "uuid": "914e2fec-96b2-550f-a180-d7338039a93a", 00:25:41.997 "is_configured": true, 00:25:41.997 "data_offset": 0, 00:25:41.997 "data_size": 65536 00:25:41.997 }, 00:25:41.997 { 00:25:41.997 "name": "BaseBdev4", 00:25:41.998 "uuid": "5361f909-23bd-575d-a921-31d35fb56b73", 00:25:41.998 "is_configured": true, 00:25:41.998 "data_offset": 0, 00:25:41.998 "data_size": 65536 00:25:41.998 } 00:25:41.998 ] 00:25:41.998 }' 00:25:41.998 19:09:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.998 19:09:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.566 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:42.825 [2024-06-10 19:09:57.449020] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:42.825 [2024-06-10 19:09:57.449045] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.825 [2024-06-10 19:09:57.449095] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.825 [2024-06-10 19:09:57.449160] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.825 [2024-06-10 19:09:57.449171] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21e7a70 name raid_bdev1, state offline 00:25:42.825 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.825 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- 
# jq length 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:43.084 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:43.344 /dev/nbd0 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 
)) 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:43.344 1+0 records in 00:25:43.344 1+0 records out 00:25:43.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025386 s, 16.1 MB/s 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:43.344 19:09:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:43.604 /dev/nbd1 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local i 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # break 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:43.604 1+0 records in 00:25:43.604 1+0 records out 00:25:43.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253025 s, 16.2 MB/s 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # size=4096 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # return 0 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:43.604 19:09:58 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.604 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.863 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 1764251 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@949 -- # '[' -z 1764251 ']' 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # kill -0 1764251 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # uname 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1764251 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1764251' 00:25:44.123 killing 
process with pid 1764251 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # kill 1764251 00:25:44.123 Received shutdown signal, test time was about 60.000000 seconds 00:25:44.123 00:25:44.123 Latency(us) 00:25:44.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.123 =================================================================================================================== 00:25:44.123 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:44.123 [2024-06-10 19:09:58.876635] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:44.123 19:09:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # wait 1764251 00:25:44.381 [2024-06-10 19:09:58.916818] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:44.381 19:09:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:25:44.381 00:25:44.381 real 0m23.312s 00:25:44.381 user 0m31.304s 00:25:44.381 sys 0m4.946s 00:25:44.381 19:09:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:44.381 19:09:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.381 ************************************ 00:25:44.381 END TEST raid_rebuild_test 00:25:44.381 ************************************ 00:25:44.641 19:09:59 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:25:44.641 19:09:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:25:44.641 19:09:59 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:44.641 19:09:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:44.641 ************************************ 00:25:44.641 START TEST raid_rebuild_test_sb 00:25:44.641 ************************************ 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # raid_rebuild_test 
raid1 4 true false true 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:44.641 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:44.642 19:09:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=1768489 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 1768489 /var/tmp/spdk-raid.sock 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@830 -- # '[' -z 1768489 ']' 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:44.642 
19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:44.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:44.642 19:09:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.642 [2024-06-10 19:09:59.265426] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:25:44.642 [2024-06-10 19:09:59.265482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768489 ] 00:25:44.642 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:44.642 Zero copy mechanism will not be used. 
00:25:44.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:44.642 EAL: Requested device 0000:b6:01.0 cannot be used 
00:25:44.642 [the preceding qat_pci_device_allocate()/EAL message pair repeated, at the same timestamp, for each remaining requested QAT device: 0000:b6:01.1-0000:b6:01.7, 0000:b6:02.0-0000:b6:02.7, 0000:b8:01.0-0000:b8:01.7, and 0000:b8:02.0-0000:b8:02.7] 
00:25:44.902 [2024-06-10 19:09:59.398521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.902 [2024-06-10 19:09:59.485204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.902 [2024-06-10 19:09:59.541854] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.902 [2024-06-10 19:09:59.541888] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.471 19:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:45.471 19:10:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@863 -- # return 0 00:25:45.471 19:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:45.471 19:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:45.730 BaseBdev1_malloc 00:25:45.730 19:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:45.989 [2024-06-10 19:10:00.610196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:45.989 [2024-06-10 19:10:00.610236] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:25:45.989 [2024-06-10 19:10:00.610256] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1beb200 00:25:45.989 [2024-06-10 19:10:00.610267] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.989 [2024-06-10 19:10:00.611781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.989 [2024-06-10 19:10:00.611807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:45.989 BaseBdev1 00:25:45.989 19:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:45.989 19:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:46.247 BaseBdev2_malloc 00:25:46.247 19:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:46.506 [2024-06-10 19:10:01.043677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:46.506 [2024-06-10 19:10:01.043714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.506 [2024-06-10 19:10:01.043731] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d82d90 00:25:46.506 [2024-06-10 19:10:01.043742] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.506 [2024-06-10 19:10:01.045104] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.506 [2024-06-10 19:10:01.045131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:46.506 BaseBdev2 00:25:46.506 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:25:46.506 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:46.765 BaseBdev3_malloc 00:25:46.765 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:46.765 [2024-06-10 19:10:01.501266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:46.765 [2024-06-10 19:10:01.501306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.765 [2024-06-10 19:10:01.501323] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d85540 00:25:46.765 [2024-06-10 19:10:01.501334] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.765 [2024-06-10 19:10:01.502680] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.765 [2024-06-10 19:10:01.502706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:46.765 BaseBdev3 00:25:46.765 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:46.765 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:47.024 BaseBdev4_malloc 00:25:47.024 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:47.284 [2024-06-10 19:10:01.958823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:47.284 [2024-06-10 
19:10:01.958861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.284 [2024-06-10 19:10:01.958878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d85b40 00:25:47.284 [2024-06-10 19:10:01.958889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.284 [2024-06-10 19:10:01.960235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.284 [2024-06-10 19:10:01.960261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:47.284 BaseBdev4 00:25:47.284 19:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:47.543 spare_malloc 00:25:47.543 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:47.802 spare_delay 00:25:47.802 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:48.061 [2024-06-10 19:10:02.632890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:48.061 [2024-06-10 19:10:02.632929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.061 [2024-06-10 19:10:02.632948] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be5f50 00:25:48.061 [2024-06-10 19:10:02.632960] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.061 [2024-06-10 19:10:02.634353] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.061 [2024-06-10 19:10:02.634380] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:48.061 spare 00:25:48.061 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:48.321 [2024-06-10 19:10:02.853505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:48.321 [2024-06-10 19:10:02.854656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:48.321 [2024-06-10 19:10:02.854706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:48.321 [2024-06-10 19:10:02.854746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:48.321 [2024-06-10 19:10:02.854922] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1be6a70 00:25:48.321 [2024-06-10 19:10:02.854933] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:48.321 [2024-06-10 19:10:02.855113] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1beaed0 00:25:48.321 [2024-06-10 19:10:02.855247] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1be6a70 00:25:48.321 [2024-06-10 19:10:02.855257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1be6a70 00:25:48.321 [2024-06-10 19:10:02.855344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.321 
19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.321 19:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.580 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.580 "name": "raid_bdev1", 00:25:48.580 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:25:48.580 "strip_size_kb": 0, 00:25:48.580 "state": "online", 00:25:48.580 "raid_level": "raid1", 00:25:48.580 "superblock": true, 00:25:48.580 "num_base_bdevs": 4, 00:25:48.580 "num_base_bdevs_discovered": 4, 00:25:48.580 "num_base_bdevs_operational": 4, 00:25:48.580 "base_bdevs_list": [ 00:25:48.580 { 00:25:48.580 "name": "BaseBdev1", 00:25:48.580 "uuid": "92809d28-96b6-58ed-b6b7-0fbf48b7fe3a", 00:25:48.580 "is_configured": true, 00:25:48.580 "data_offset": 2048, 00:25:48.580 "data_size": 63488 00:25:48.580 }, 00:25:48.580 { 00:25:48.580 "name": "BaseBdev2", 00:25:48.580 "uuid": "806ea6e5-1dae-5348-ab66-684aac9f64f4", 00:25:48.580 "is_configured": true, 00:25:48.580 "data_offset": 2048, 00:25:48.580 "data_size": 63488 00:25:48.580 }, 
00:25:48.580 { 00:25:48.580 "name": "BaseBdev3", 00:25:48.580 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:25:48.580 "is_configured": true, 00:25:48.580 "data_offset": 2048, 00:25:48.580 "data_size": 63488 00:25:48.580 }, 00:25:48.580 { 00:25:48.580 "name": "BaseBdev4", 00:25:48.580 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:25:48.580 "is_configured": true, 00:25:48.580 "data_offset": 2048, 00:25:48.580 "data_size": 63488 00:25:48.580 } 00:25:48.580 ] 00:25:48.580 }' 00:25:48.580 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.580 19:10:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.176 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.176 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:49.176 [2024-06-10 19:10:03.864382] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.176 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:25:49.176 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:49.176 19:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:49.477 19:10:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:49.477 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:49.736 [2024-06-10 19:10:04.269233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1beaed0 00:25:49.737 /dev/nbd0 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # 
grep -q -w nbd0 /proc/partitions 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:49.737 1+0 records in 00:25:49.737 1+0 records out 00:25:49.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259615 s, 15.8 MB/s 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:49.737 19:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:25:57.850 63488+0 records in 00:25:57.850 63488+0 records out 00:25:57.850 32505856 bytes (33 MB, 31 MiB) copied, 6.79835 s, 4.8 MB/s 
00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:57.850 [2024-06-10 19:10:11.377264] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:57.850 
[2024-06-10 19:10:11.592904] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.850 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.851 "name": "raid_bdev1", 00:25:57.851 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:25:57.851 "strip_size_kb": 0, 00:25:57.851 "state": "online", 00:25:57.851 "raid_level": "raid1", 00:25:57.851 "superblock": true, 00:25:57.851 "num_base_bdevs": 4, 00:25:57.851 "num_base_bdevs_discovered": 3, 00:25:57.851 "num_base_bdevs_operational": 3, 00:25:57.851 
"base_bdevs_list": [ 00:25:57.851 { 00:25:57.851 "name": null, 00:25:57.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.851 "is_configured": false, 00:25:57.851 "data_offset": 2048, 00:25:57.851 "data_size": 63488 00:25:57.851 }, 00:25:57.851 { 00:25:57.851 "name": "BaseBdev2", 00:25:57.851 "uuid": "806ea6e5-1dae-5348-ab66-684aac9f64f4", 00:25:57.851 "is_configured": true, 00:25:57.851 "data_offset": 2048, 00:25:57.851 "data_size": 63488 00:25:57.851 }, 00:25:57.851 { 00:25:57.851 "name": "BaseBdev3", 00:25:57.851 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:25:57.851 "is_configured": true, 00:25:57.851 "data_offset": 2048, 00:25:57.851 "data_size": 63488 00:25:57.851 }, 00:25:57.851 { 00:25:57.851 "name": "BaseBdev4", 00:25:57.851 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:25:57.851 "is_configured": true, 00:25:57.851 "data_offset": 2048, 00:25:57.851 "data_size": 63488 00:25:57.851 } 00:25:57.851 ] 00:25:57.851 }' 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.851 19:10:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.851 19:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:58.109 [2024-06-10 19:10:12.623623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.109 [2024-06-10 19:10:12.627500] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bea8d0 00:25:58.109 [2024-06-10 19:10:12.629623] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:58.109 19:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:59.046 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.046 19:10:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:59.046 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:59.046 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:59.046 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:59.046 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.046 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.305 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:59.305 "name": "raid_bdev1", 00:25:59.305 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:25:59.305 "strip_size_kb": 0, 00:25:59.305 "state": "online", 00:25:59.305 "raid_level": "raid1", 00:25:59.305 "superblock": true, 00:25:59.305 "num_base_bdevs": 4, 00:25:59.305 "num_base_bdevs_discovered": 4, 00:25:59.305 "num_base_bdevs_operational": 4, 00:25:59.305 "process": { 00:25:59.305 "type": "rebuild", 00:25:59.305 "target": "spare", 00:25:59.305 "progress": { 00:25:59.305 "blocks": 24576, 00:25:59.305 "percent": 38 00:25:59.305 } 00:25:59.305 }, 00:25:59.305 "base_bdevs_list": [ 00:25:59.305 { 00:25:59.305 "name": "spare", 00:25:59.305 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:25:59.305 "is_configured": true, 00:25:59.306 "data_offset": 2048, 00:25:59.306 "data_size": 63488 00:25:59.306 }, 00:25:59.306 { 00:25:59.306 "name": "BaseBdev2", 00:25:59.306 "uuid": "806ea6e5-1dae-5348-ab66-684aac9f64f4", 00:25:59.306 "is_configured": true, 00:25:59.306 "data_offset": 2048, 00:25:59.306 "data_size": 63488 00:25:59.306 }, 00:25:59.306 { 00:25:59.306 "name": "BaseBdev3", 00:25:59.306 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 
00:25:59.306 "is_configured": true, 00:25:59.306 "data_offset": 2048, 00:25:59.306 "data_size": 63488 00:25:59.306 }, 00:25:59.306 { 00:25:59.306 "name": "BaseBdev4", 00:25:59.306 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:25:59.306 "is_configured": true, 00:25:59.306 "data_offset": 2048, 00:25:59.306 "data_size": 63488 00:25:59.306 } 00:25:59.306 ] 00:25:59.306 }' 00:25:59.306 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:59.306 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:59.306 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:59.306 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:59.306 19:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:59.565 [2024-06-10 19:10:14.174598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.565 [2024-06-10 19:10:14.241285] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:59.565 [2024-06-10 19:10:14.241324] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.565 [2024-06-10 19:10:14.241340] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.565 [2024-06-10 19:10:14.241348] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.565 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.823 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.823 "name": "raid_bdev1", 00:25:59.823 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:25:59.823 "strip_size_kb": 0, 00:25:59.823 "state": "online", 00:25:59.823 "raid_level": "raid1", 00:25:59.823 "superblock": true, 00:25:59.823 "num_base_bdevs": 4, 00:25:59.823 "num_base_bdevs_discovered": 3, 00:25:59.823 "num_base_bdevs_operational": 3, 00:25:59.823 "base_bdevs_list": [ 00:25:59.823 { 00:25:59.823 "name": null, 00:25:59.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.823 "is_configured": false, 00:25:59.823 "data_offset": 2048, 00:25:59.823 "data_size": 63488 00:25:59.823 }, 00:25:59.823 { 00:25:59.823 "name": "BaseBdev2", 00:25:59.823 "uuid": "806ea6e5-1dae-5348-ab66-684aac9f64f4", 00:25:59.823 "is_configured": true, 00:25:59.823 "data_offset": 2048, 00:25:59.823 
"data_size": 63488 00:25:59.823 }, 00:25:59.823 { 00:25:59.823 "name": "BaseBdev3", 00:25:59.823 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:25:59.823 "is_configured": true, 00:25:59.823 "data_offset": 2048, 00:25:59.823 "data_size": 63488 00:25:59.823 }, 00:25:59.823 { 00:25:59.823 "name": "BaseBdev4", 00:25:59.823 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:25:59.823 "is_configured": true, 00:25:59.823 "data_offset": 2048, 00:25:59.823 "data_size": 63488 00:25:59.823 } 00:25:59.823 ] 00:25:59.823 }' 00:25:59.823 19:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.823 19:10:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.391 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.651 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:00.651 "name": "raid_bdev1", 00:26:00.651 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:00.651 "strip_size_kb": 0, 00:26:00.651 "state": "online", 00:26:00.651 "raid_level": "raid1", 00:26:00.651 "superblock": true, 00:26:00.651 "num_base_bdevs": 4, 00:26:00.651 
"num_base_bdevs_discovered": 3, 00:26:00.651 "num_base_bdevs_operational": 3, 00:26:00.651 "base_bdevs_list": [ 00:26:00.651 { 00:26:00.651 "name": null, 00:26:00.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.651 "is_configured": false, 00:26:00.651 "data_offset": 2048, 00:26:00.651 "data_size": 63488 00:26:00.651 }, 00:26:00.651 { 00:26:00.651 "name": "BaseBdev2", 00:26:00.651 "uuid": "806ea6e5-1dae-5348-ab66-684aac9f64f4", 00:26:00.651 "is_configured": true, 00:26:00.651 "data_offset": 2048, 00:26:00.651 "data_size": 63488 00:26:00.651 }, 00:26:00.651 { 00:26:00.651 "name": "BaseBdev3", 00:26:00.651 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:00.651 "is_configured": true, 00:26:00.651 "data_offset": 2048, 00:26:00.651 "data_size": 63488 00:26:00.651 }, 00:26:00.651 { 00:26:00.651 "name": "BaseBdev4", 00:26:00.651 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:00.651 "is_configured": true, 00:26:00.651 "data_offset": 2048, 00:26:00.651 "data_size": 63488 00:26:00.651 } 00:26:00.651 ] 00:26:00.651 }' 00:26:00.651 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:00.651 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:00.651 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:00.651 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:00.651 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:00.910 [2024-06-10 19:10:15.572665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.910 [2024-06-10 19:10:15.576562] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bea930 00:26:00.910 [2024-06-10 19:10:15.578026] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:00.910 19:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.847 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.106 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:02.106 "name": "raid_bdev1", 00:26:02.106 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:02.106 "strip_size_kb": 0, 00:26:02.106 "state": "online", 00:26:02.106 "raid_level": "raid1", 00:26:02.106 "superblock": true, 00:26:02.106 "num_base_bdevs": 4, 00:26:02.106 "num_base_bdevs_discovered": 4, 00:26:02.106 "num_base_bdevs_operational": 4, 00:26:02.106 "process": { 00:26:02.106 "type": "rebuild", 00:26:02.106 "target": "spare", 00:26:02.106 "progress": { 00:26:02.106 "blocks": 24576, 00:26:02.106 "percent": 38 00:26:02.106 } 00:26:02.106 }, 00:26:02.106 "base_bdevs_list": [ 00:26:02.106 { 00:26:02.106 "name": "spare", 00:26:02.106 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:02.106 "is_configured": true, 00:26:02.106 "data_offset": 2048, 00:26:02.106 "data_size": 63488 00:26:02.106 }, 
00:26:02.106 { 00:26:02.106 "name": "BaseBdev2", 00:26:02.106 "uuid": "806ea6e5-1dae-5348-ab66-684aac9f64f4", 00:26:02.106 "is_configured": true, 00:26:02.106 "data_offset": 2048, 00:26:02.106 "data_size": 63488 00:26:02.106 }, 00:26:02.106 { 00:26:02.106 "name": "BaseBdev3", 00:26:02.106 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:02.106 "is_configured": true, 00:26:02.106 "data_offset": 2048, 00:26:02.106 "data_size": 63488 00:26:02.106 }, 00:26:02.106 { 00:26:02.106 "name": "BaseBdev4", 00:26:02.106 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:02.106 "is_configured": true, 00:26:02.106 "data_offset": 2048, 00:26:02.106 "data_size": 63488 00:26:02.106 } 00:26:02.106 ] 00:26:02.106 }' 00:26:02.106 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:02.366 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:26:02.366 19:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:26:02.625 [2024-06-10 19:10:17.123135] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:02.625 [2024-06-10 19:10:17.289949] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1bea930 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.625 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.884 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:02.884 "name": "raid_bdev1", 00:26:02.884 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:02.884 "strip_size_kb": 0, 00:26:02.884 "state": "online", 00:26:02.884 "raid_level": "raid1", 00:26:02.884 "superblock": true, 00:26:02.884 "num_base_bdevs": 4, 00:26:02.884 "num_base_bdevs_discovered": 3, 00:26:02.884 "num_base_bdevs_operational": 3, 00:26:02.884 "process": { 00:26:02.884 "type": "rebuild", 00:26:02.884 "target": "spare", 00:26:02.884 "progress": { 00:26:02.884 "blocks": 36864, 00:26:02.884 
"percent": 58 00:26:02.884 } 00:26:02.884 }, 00:26:02.884 "base_bdevs_list": [ 00:26:02.884 { 00:26:02.884 "name": "spare", 00:26:02.884 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:02.884 "is_configured": true, 00:26:02.884 "data_offset": 2048, 00:26:02.884 "data_size": 63488 00:26:02.884 }, 00:26:02.884 { 00:26:02.885 "name": null, 00:26:02.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.885 "is_configured": false, 00:26:02.885 "data_offset": 2048, 00:26:02.885 "data_size": 63488 00:26:02.885 }, 00:26:02.885 { 00:26:02.885 "name": "BaseBdev3", 00:26:02.885 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:02.885 "is_configured": true, 00:26:02.885 "data_offset": 2048, 00:26:02.885 "data_size": 63488 00:26:02.885 }, 00:26:02.885 { 00:26:02.885 "name": "BaseBdev4", 00:26:02.885 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:02.885 "is_configured": true, 00:26:02.885 "data_offset": 2048, 00:26:02.885 "data_size": 63488 00:26:02.885 } 00:26:02.885 ] 00:26:02.885 }' 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=842 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.885 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.144 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.144 "name": "raid_bdev1", 00:26:03.144 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:03.144 "strip_size_kb": 0, 00:26:03.144 "state": "online", 00:26:03.144 "raid_level": "raid1", 00:26:03.144 "superblock": true, 00:26:03.144 "num_base_bdevs": 4, 00:26:03.144 "num_base_bdevs_discovered": 3, 00:26:03.144 "num_base_bdevs_operational": 3, 00:26:03.144 "process": { 00:26:03.144 "type": "rebuild", 00:26:03.144 "target": "spare", 00:26:03.144 "progress": { 00:26:03.144 "blocks": 43008, 00:26:03.144 "percent": 67 00:26:03.144 } 00:26:03.144 }, 00:26:03.144 "base_bdevs_list": [ 00:26:03.144 { 00:26:03.144 "name": "spare", 00:26:03.144 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:03.144 "is_configured": true, 00:26:03.144 "data_offset": 2048, 00:26:03.144 "data_size": 63488 00:26:03.144 }, 00:26:03.144 { 00:26:03.144 "name": null, 00:26:03.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.144 "is_configured": false, 00:26:03.144 "data_offset": 2048, 00:26:03.144 "data_size": 63488 00:26:03.144 }, 00:26:03.144 { 00:26:03.144 "name": "BaseBdev3", 00:26:03.144 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:03.144 "is_configured": true, 00:26:03.144 "data_offset": 2048, 00:26:03.144 "data_size": 63488 00:26:03.144 }, 00:26:03.144 { 00:26:03.144 "name": 
"BaseBdev4", 00:26:03.144 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:03.144 "is_configured": true, 00:26:03.144 "data_offset": 2048, 00:26:03.144 "data_size": 63488 00:26:03.144 } 00:26:03.144 ] 00:26:03.144 }' 00:26:03.145 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.145 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.145 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.404 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.404 19:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:04.346 [2024-06-10 19:10:18.800896] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:04.346 [2024-06-10 19:10:18.800954] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:04.346 [2024-06-10 19:10:18.801044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:04.346 19:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.606 "name": "raid_bdev1", 00:26:04.606 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:04.606 "strip_size_kb": 0, 00:26:04.606 "state": "online", 00:26:04.606 "raid_level": "raid1", 00:26:04.606 "superblock": true, 00:26:04.606 "num_base_bdevs": 4, 00:26:04.606 "num_base_bdevs_discovered": 3, 00:26:04.606 "num_base_bdevs_operational": 3, 00:26:04.606 "base_bdevs_list": [ 00:26:04.606 { 00:26:04.606 "name": "spare", 00:26:04.606 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:04.606 "is_configured": true, 00:26:04.606 "data_offset": 2048, 00:26:04.606 "data_size": 63488 00:26:04.606 }, 00:26:04.606 { 00:26:04.606 "name": null, 00:26:04.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.606 "is_configured": false, 00:26:04.606 "data_offset": 2048, 00:26:04.606 "data_size": 63488 00:26:04.606 }, 00:26:04.606 { 00:26:04.606 "name": "BaseBdev3", 00:26:04.606 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:04.606 "is_configured": true, 00:26:04.606 "data_offset": 2048, 00:26:04.606 "data_size": 63488 00:26:04.606 }, 00:26:04.606 { 00:26:04.606 "name": "BaseBdev4", 00:26:04.606 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:04.606 "is_configured": true, 00:26:04.606 "data_offset": 2048, 00:26:04.606 "data_size": 63488 00:26:04.606 } 00:26:04.606 ] 00:26:04.606 }' 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # 
[[ none == \s\p\a\r\e ]] 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.606 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.866 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.866 "name": "raid_bdev1", 00:26:04.866 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:04.866 "strip_size_kb": 0, 00:26:04.866 "state": "online", 00:26:04.866 "raid_level": "raid1", 00:26:04.866 "superblock": true, 00:26:04.866 "num_base_bdevs": 4, 00:26:04.866 "num_base_bdevs_discovered": 3, 00:26:04.866 "num_base_bdevs_operational": 3, 00:26:04.866 "base_bdevs_list": [ 00:26:04.866 { 00:26:04.866 "name": "spare", 00:26:04.866 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:04.866 "is_configured": true, 00:26:04.866 "data_offset": 2048, 00:26:04.866 "data_size": 63488 00:26:04.866 }, 00:26:04.866 { 00:26:04.866 "name": null, 00:26:04.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.866 "is_configured": false, 00:26:04.866 "data_offset": 2048, 00:26:04.866 "data_size": 63488 00:26:04.866 }, 00:26:04.866 { 00:26:04.866 "name": "BaseBdev3", 00:26:04.866 "uuid": 
"771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:04.866 "is_configured": true, 00:26:04.866 "data_offset": 2048, 00:26:04.866 "data_size": 63488 00:26:04.866 }, 00:26:04.866 { 00:26:04.866 "name": "BaseBdev4", 00:26:04.866 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:04.866 "is_configured": true, 00:26:04.866 "data_offset": 2048, 00:26:04.866 "data_size": 63488 00:26:04.866 } 00:26:04.866 ] 00:26:04.866 }' 00:26:04.866 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:04.866 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:04.866 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.867 19:10:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.867 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.126 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.126 "name": "raid_bdev1", 00:26:05.126 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:05.126 "strip_size_kb": 0, 00:26:05.126 "state": "online", 00:26:05.126 "raid_level": "raid1", 00:26:05.126 "superblock": true, 00:26:05.126 "num_base_bdevs": 4, 00:26:05.126 "num_base_bdevs_discovered": 3, 00:26:05.126 "num_base_bdevs_operational": 3, 00:26:05.126 "base_bdevs_list": [ 00:26:05.126 { 00:26:05.126 "name": "spare", 00:26:05.126 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:05.126 "is_configured": true, 00:26:05.126 "data_offset": 2048, 00:26:05.127 "data_size": 63488 00:26:05.127 }, 00:26:05.127 { 00:26:05.127 "name": null, 00:26:05.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.127 "is_configured": false, 00:26:05.127 "data_offset": 2048, 00:26:05.127 "data_size": 63488 00:26:05.127 }, 00:26:05.127 { 00:26:05.127 "name": "BaseBdev3", 00:26:05.127 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:05.127 "is_configured": true, 00:26:05.127 "data_offset": 2048, 00:26:05.127 "data_size": 63488 00:26:05.127 }, 00:26:05.127 { 00:26:05.127 "name": "BaseBdev4", 00:26:05.127 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:05.127 "is_configured": true, 00:26:05.127 "data_offset": 2048, 00:26:05.127 "data_size": 63488 00:26:05.127 } 00:26:05.127 ] 00:26:05.127 }' 00:26:05.127 19:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.127 19:10:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.697 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:05.956 [2024-06-10 19:10:20.594153] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:05.956 [2024-06-10 19:10:20.594178] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.956 [2024-06-10 19:10:20.594227] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.956 [2024-06-10 19:10:20.594289] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.956 [2024-06-10 19:10:20.594300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be6a70 name raid_bdev1, state offline 00:26:05.956 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.956 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:06.216 19:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:06.476 /dev/nbd0 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:06.476 1+0 records in 00:26:06.476 1+0 records out 00:26:06.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247698 s, 16.5 MB/s 
00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:06.476 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:06.736 /dev/nbd1 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local i 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # break 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i = 1 
)) 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:06.736 1+0 records in 00:26:06.736 1+0 records out 00:26:06.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325006 s, 12.6 MB/s 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # size=4096 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # return 0 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:06.736 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:06.737 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:06.737 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:06.737 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:06.737 19:10:21 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.737 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.997 19:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:07.257 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:07.516 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:07.775 [2024-06-10 19:10:22.454022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:07.775 [2024-06-10 19:10:22.454063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.775 [2024-06-10 19:10:22.454080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be2470 00:26:07.775 [2024-06-10 19:10:22.454092] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.775 [2024-06-10 19:10:22.455609] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.775 [2024-06-10 19:10:22.455636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:07.775 [2024-06-10 19:10:22.455706] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:07.775 [2024-06-10 19:10:22.455731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:07.775 [2024-06-10 19:10:22.455823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:07.775 [2024-06-10 19:10:22.455888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:07.775 spare 00:26:07.775 19:10:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.775 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.035 [2024-06-10 19:10:22.556198] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d841c0 00:26:08.035 [2024-06-10 19:10:22.556214] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:08.035 [2024-06-10 19:10:22.556393] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18bb010 00:26:08.035 [2024-06-10 19:10:22.556541] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d841c0 00:26:08.035 [2024-06-10 19:10:22.556551] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x1d841c0 00:26:08.035 [2024-06-10 19:10:22.556651] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.035 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.035 "name": "raid_bdev1", 00:26:08.035 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:08.035 "strip_size_kb": 0, 00:26:08.035 "state": "online", 00:26:08.035 "raid_level": "raid1", 00:26:08.035 "superblock": true, 00:26:08.035 "num_base_bdevs": 4, 00:26:08.035 "num_base_bdevs_discovered": 3, 00:26:08.035 "num_base_bdevs_operational": 3, 00:26:08.035 "base_bdevs_list": [ 00:26:08.035 { 00:26:08.035 "name": "spare", 00:26:08.035 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:08.035 "is_configured": true, 00:26:08.035 "data_offset": 2048, 00:26:08.035 "data_size": 63488 00:26:08.035 }, 00:26:08.035 { 00:26:08.035 "name": null, 00:26:08.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.035 "is_configured": false, 00:26:08.035 "data_offset": 2048, 00:26:08.035 "data_size": 63488 00:26:08.035 }, 00:26:08.035 { 00:26:08.035 "name": "BaseBdev3", 00:26:08.035 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:08.035 "is_configured": true, 00:26:08.035 "data_offset": 2048, 00:26:08.035 "data_size": 63488 00:26:08.035 }, 00:26:08.035 { 00:26:08.035 "name": "BaseBdev4", 00:26:08.035 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:08.035 "is_configured": true, 00:26:08.035 "data_offset": 2048, 00:26:08.035 "data_size": 63488 00:26:08.035 } 00:26:08.035 ] 00:26:08.035 }' 00:26:08.035 19:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.035 19:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.604 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.863 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:08.863 "name": "raid_bdev1", 00:26:08.863 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:08.863 "strip_size_kb": 0, 00:26:08.863 "state": "online", 00:26:08.863 "raid_level": "raid1", 00:26:08.863 "superblock": true, 00:26:08.863 "num_base_bdevs": 4, 00:26:08.863 "num_base_bdevs_discovered": 3, 00:26:08.863 "num_base_bdevs_operational": 3, 00:26:08.863 "base_bdevs_list": [ 00:26:08.863 { 00:26:08.863 "name": "spare", 00:26:08.863 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:08.863 "is_configured": true, 00:26:08.863 "data_offset": 2048, 00:26:08.863 "data_size": 63488 00:26:08.863 }, 00:26:08.863 { 00:26:08.863 "name": null, 00:26:08.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.863 "is_configured": false, 00:26:08.863 "data_offset": 2048, 00:26:08.863 "data_size": 63488 00:26:08.863 }, 00:26:08.863 { 00:26:08.863 "name": "BaseBdev3", 00:26:08.863 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:08.863 "is_configured": true, 00:26:08.863 "data_offset": 2048, 00:26:08.863 "data_size": 63488 00:26:08.863 }, 00:26:08.863 { 00:26:08.863 "name": "BaseBdev4", 00:26:08.864 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:08.864 "is_configured": true, 00:26:08.864 "data_offset": 
2048, 00:26:08.864 "data_size": 63488 00:26:08.864 } 00:26:08.864 ] 00:26:08.864 }' 00:26:08.864 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:08.864 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:08.864 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:08.864 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:08.864 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:08.864 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.123 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:09.123 19:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:09.382 [2024-06-10 19:10:24.038276] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:09.382 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:09.383 19:10:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.383 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.642 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.642 "name": "raid_bdev1", 00:26:09.642 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:09.642 "strip_size_kb": 0, 00:26:09.642 "state": "online", 00:26:09.642 "raid_level": "raid1", 00:26:09.642 "superblock": true, 00:26:09.642 "num_base_bdevs": 4, 00:26:09.642 "num_base_bdevs_discovered": 2, 00:26:09.642 "num_base_bdevs_operational": 2, 00:26:09.642 "base_bdevs_list": [ 00:26:09.642 { 00:26:09.642 "name": null, 00:26:09.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.642 "is_configured": false, 00:26:09.642 "data_offset": 2048, 00:26:09.642 "data_size": 63488 00:26:09.642 }, 00:26:09.642 { 00:26:09.642 "name": null, 00:26:09.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.642 "is_configured": false, 00:26:09.642 "data_offset": 2048, 00:26:09.642 "data_size": 63488 00:26:09.642 }, 00:26:09.642 { 00:26:09.642 "name": "BaseBdev3", 00:26:09.642 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:09.642 "is_configured": true, 00:26:09.642 "data_offset": 2048, 00:26:09.642 "data_size": 63488 00:26:09.642 }, 00:26:09.642 { 00:26:09.642 "name": "BaseBdev4", 00:26:09.642 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 
00:26:09.642 "is_configured": true, 00:26:09.642 "data_offset": 2048, 00:26:09.642 "data_size": 63488 00:26:09.642 } 00:26:09.642 ] 00:26:09.642 }' 00:26:09.642 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.642 19:10:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.211 19:10:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:10.468 [2024-06-10 19:10:25.077032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:10.468 [2024-06-10 19:10:25.077164] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:10.468 [2024-06-10 19:10:25.077179] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:10.468 [2024-06-10 19:10:25.077204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:10.468 [2024-06-10 19:10:25.080961] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be7380 00:26:10.468 [2024-06-10 19:10:25.083062] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:10.468 19:10:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.407 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.666 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:11.666 "name": "raid_bdev1", 00:26:11.666 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:11.666 "strip_size_kb": 0, 00:26:11.666 "state": "online", 00:26:11.666 "raid_level": "raid1", 00:26:11.666 "superblock": true, 00:26:11.666 "num_base_bdevs": 4, 00:26:11.666 "num_base_bdevs_discovered": 3, 00:26:11.666 "num_base_bdevs_operational": 3, 00:26:11.666 "process": { 00:26:11.666 "type": "rebuild", 00:26:11.666 "target": "spare", 00:26:11.666 "progress": { 00:26:11.666 "blocks": 24576, 00:26:11.666 "percent": 38 00:26:11.666 } 00:26:11.666 }, 00:26:11.666 "base_bdevs_list": [ 00:26:11.666 { 00:26:11.666 "name": "spare", 00:26:11.666 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:11.666 "is_configured": true, 00:26:11.666 "data_offset": 2048, 00:26:11.666 "data_size": 63488 00:26:11.666 }, 00:26:11.666 { 00:26:11.666 "name": null, 00:26:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.666 "is_configured": false, 00:26:11.666 "data_offset": 2048, 00:26:11.666 "data_size": 63488 00:26:11.666 }, 00:26:11.666 { 00:26:11.666 "name": "BaseBdev3", 00:26:11.666 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:11.666 "is_configured": true, 00:26:11.666 "data_offset": 2048, 00:26:11.666 "data_size": 63488 00:26:11.666 }, 00:26:11.666 { 00:26:11.666 "name": "BaseBdev4", 00:26:11.666 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:11.666 "is_configured": true, 00:26:11.666 "data_offset": 2048, 00:26:11.666 "data_size": 63488 00:26:11.666 } 00:26:11.666 ] 00:26:11.666 }' 
00:26:11.666 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:11.667 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:11.667 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:11.667 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.667 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:11.926 [2024-06-10 19:10:26.628126] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:12.185 [2024-06-10 19:10:26.694663] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:12.185 [2024-06-10 19:10:26.694702] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.185 [2024-06-10 19:10:26.694717] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:12.185 [2024-06-10 19:10:26.694724] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:12.185 19:10:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.185 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.445 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.445 "name": "raid_bdev1", 00:26:12.445 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:12.445 "strip_size_kb": 0, 00:26:12.445 "state": "online", 00:26:12.445 "raid_level": "raid1", 00:26:12.445 "superblock": true, 00:26:12.445 "num_base_bdevs": 4, 00:26:12.445 "num_base_bdevs_discovered": 2, 00:26:12.445 "num_base_bdevs_operational": 2, 00:26:12.445 "base_bdevs_list": [ 00:26:12.445 { 00:26:12.445 "name": null, 00:26:12.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.445 "is_configured": false, 00:26:12.445 "data_offset": 2048, 00:26:12.445 "data_size": 63488 00:26:12.445 }, 00:26:12.445 { 00:26:12.445 "name": null, 00:26:12.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.445 "is_configured": false, 00:26:12.445 "data_offset": 2048, 00:26:12.445 "data_size": 63488 00:26:12.445 }, 00:26:12.445 { 00:26:12.446 "name": "BaseBdev3", 00:26:12.446 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:12.446 "is_configured": true, 00:26:12.446 "data_offset": 2048, 00:26:12.446 "data_size": 63488 00:26:12.446 }, 00:26:12.446 { 00:26:12.446 "name": "BaseBdev4", 00:26:12.446 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 
00:26:12.446 "is_configured": true, 00:26:12.446 "data_offset": 2048, 00:26:12.446 "data_size": 63488 00:26:12.446 } 00:26:12.446 ] 00:26:12.446 }' 00:26:12.446 19:10:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.446 19:10:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.015 19:10:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:13.015 [2024-06-10 19:10:27.725108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:13.015 [2024-06-10 19:10:27.725155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.015 [2024-06-10 19:10:27.725179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d84630 00:26:13.015 [2024-06-10 19:10:27.725191] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.015 [2024-06-10 19:10:27.725531] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.015 [2024-06-10 19:10:27.725546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:13.015 [2024-06-10 19:10:27.725625] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:13.015 [2024-06-10 19:10:27.725637] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:13.015 [2024-06-10 19:10:27.725646] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:13.015 [2024-06-10 19:10:27.725664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:13.015 [2024-06-10 19:10:27.729470] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be88a0 00:26:13.015 spare 00:26:13.015 [2024-06-10 19:10:27.730904] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:13.015 19:10:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:14.394 "name": "raid_bdev1", 00:26:14.394 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:14.394 "strip_size_kb": 0, 00:26:14.394 "state": "online", 00:26:14.394 "raid_level": "raid1", 00:26:14.394 "superblock": true, 00:26:14.394 "num_base_bdevs": 4, 00:26:14.394 "num_base_bdevs_discovered": 3, 00:26:14.394 "num_base_bdevs_operational": 3, 00:26:14.394 "process": { 00:26:14.394 "type": "rebuild", 00:26:14.394 "target": "spare", 00:26:14.394 "progress": { 00:26:14.394 "blocks": 24576, 00:26:14.394 
"percent": 38 00:26:14.394 } 00:26:14.394 }, 00:26:14.394 "base_bdevs_list": [ 00:26:14.394 { 00:26:14.394 "name": "spare", 00:26:14.394 "uuid": "e332bcb8-fdb0-5286-850c-a1b51aba1e73", 00:26:14.394 "is_configured": true, 00:26:14.394 "data_offset": 2048, 00:26:14.394 "data_size": 63488 00:26:14.394 }, 00:26:14.394 { 00:26:14.394 "name": null, 00:26:14.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.394 "is_configured": false, 00:26:14.394 "data_offset": 2048, 00:26:14.394 "data_size": 63488 00:26:14.394 }, 00:26:14.394 { 00:26:14.394 "name": "BaseBdev3", 00:26:14.394 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:14.394 "is_configured": true, 00:26:14.394 "data_offset": 2048, 00:26:14.394 "data_size": 63488 00:26:14.394 }, 00:26:14.394 { 00:26:14.394 "name": "BaseBdev4", 00:26:14.394 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:14.394 "is_configured": true, 00:26:14.394 "data_offset": 2048, 00:26:14.394 "data_size": 63488 00:26:14.394 } 00:26:14.394 ] 00:26:14.394 }' 00:26:14.394 19:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:14.394 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:14.394 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:14.394 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:14.394 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:14.653 [2024-06-10 19:10:29.287018] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:14.653 [2024-06-10 19:10:29.342540] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:14.653 [2024-06-10 19:10:29.342584] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.653 [2024-06-10 19:10:29.342599] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:14.654 [2024-06-10 19:10:29.342607] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.654 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.913 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.913 "name": "raid_bdev1", 00:26:14.913 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:14.913 "strip_size_kb": 0, 00:26:14.913 "state": 
"online", 00:26:14.913 "raid_level": "raid1", 00:26:14.913 "superblock": true, 00:26:14.913 "num_base_bdevs": 4, 00:26:14.913 "num_base_bdevs_discovered": 2, 00:26:14.913 "num_base_bdevs_operational": 2, 00:26:14.913 "base_bdevs_list": [ 00:26:14.913 { 00:26:14.913 "name": null, 00:26:14.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.913 "is_configured": false, 00:26:14.913 "data_offset": 2048, 00:26:14.913 "data_size": 63488 00:26:14.913 }, 00:26:14.913 { 00:26:14.913 "name": null, 00:26:14.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.913 "is_configured": false, 00:26:14.913 "data_offset": 2048, 00:26:14.913 "data_size": 63488 00:26:14.913 }, 00:26:14.913 { 00:26:14.913 "name": "BaseBdev3", 00:26:14.913 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:14.913 "is_configured": true, 00:26:14.913 "data_offset": 2048, 00:26:14.913 "data_size": 63488 00:26:14.913 }, 00:26:14.913 { 00:26:14.913 "name": "BaseBdev4", 00:26:14.913 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:14.913 "is_configured": true, 00:26:14.913 "data_offset": 2048, 00:26:14.913 "data_size": 63488 00:26:14.913 } 00:26:14.913 ] 00:26:14.913 }' 00:26:14.913 19:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.913 19:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.481 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.741 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:15.741 "name": "raid_bdev1", 00:26:15.741 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:15.741 "strip_size_kb": 0, 00:26:15.741 "state": "online", 00:26:15.741 "raid_level": "raid1", 00:26:15.741 "superblock": true, 00:26:15.741 "num_base_bdevs": 4, 00:26:15.741 "num_base_bdevs_discovered": 2, 00:26:15.741 "num_base_bdevs_operational": 2, 00:26:15.741 "base_bdevs_list": [ 00:26:15.741 { 00:26:15.741 "name": null, 00:26:15.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.741 "is_configured": false, 00:26:15.741 "data_offset": 2048, 00:26:15.741 "data_size": 63488 00:26:15.741 }, 00:26:15.741 { 00:26:15.741 "name": null, 00:26:15.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.741 "is_configured": false, 00:26:15.741 "data_offset": 2048, 00:26:15.741 "data_size": 63488 00:26:15.741 }, 00:26:15.741 { 00:26:15.741 "name": "BaseBdev3", 00:26:15.741 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:15.741 "is_configured": true, 00:26:15.741 "data_offset": 2048, 00:26:15.741 "data_size": 63488 00:26:15.741 }, 00:26:15.741 { 00:26:15.741 "name": "BaseBdev4", 00:26:15.741 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:15.741 "is_configured": true, 00:26:15.741 "data_offset": 2048, 00:26:15.741 "data_size": 63488 00:26:15.741 } 00:26:15.741 ] 00:26:15.741 }' 00:26:15.741 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:15.741 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:15.741 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:26:15.741 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:15.741 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:16.000 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:16.269 [2024-06-10 19:10:30.926549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:16.269 [2024-06-10 19:10:30.926593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.269 [2024-06-10 19:10:30.926610] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c82130 00:26:16.269 [2024-06-10 19:10:30.926621] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.269 [2024-06-10 19:10:30.926935] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.269 [2024-06-10 19:10:30.926952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:16.269 [2024-06-10 19:10:30.927007] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:16.269 [2024-06-10 19:10:30.927023] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:16.269 [2024-06-10 19:10:30.927033] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:16.269 BaseBdev1 00:26:16.269 19:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:17.277 
19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.277 19:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.537 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.537 "name": "raid_bdev1", 00:26:17.537 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:17.537 "strip_size_kb": 0, 00:26:17.537 "state": "online", 00:26:17.537 "raid_level": "raid1", 00:26:17.537 "superblock": true, 00:26:17.537 "num_base_bdevs": 4, 00:26:17.537 "num_base_bdevs_discovered": 2, 00:26:17.537 "num_base_bdevs_operational": 2, 00:26:17.537 "base_bdevs_list": [ 00:26:17.537 { 00:26:17.537 "name": null, 00:26:17.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.537 "is_configured": false, 00:26:17.537 "data_offset": 2048, 00:26:17.537 "data_size": 63488 00:26:17.537 }, 
00:26:17.537 { 00:26:17.537 "name": null, 00:26:17.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.537 "is_configured": false, 00:26:17.537 "data_offset": 2048, 00:26:17.537 "data_size": 63488 00:26:17.537 }, 00:26:17.537 { 00:26:17.537 "name": "BaseBdev3", 00:26:17.537 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:17.537 "is_configured": true, 00:26:17.537 "data_offset": 2048, 00:26:17.537 "data_size": 63488 00:26:17.537 }, 00:26:17.537 { 00:26:17.537 "name": "BaseBdev4", 00:26:17.537 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:17.537 "is_configured": true, 00:26:17.537 "data_offset": 2048, 00:26:17.537 "data_size": 63488 00:26:17.537 } 00:26:17.537 ] 00:26:17.537 }' 00:26:17.537 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.537 19:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.104 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.364 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:18.364 "name": "raid_bdev1", 00:26:18.364 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:18.364 
"strip_size_kb": 0, 00:26:18.364 "state": "online", 00:26:18.364 "raid_level": "raid1", 00:26:18.364 "superblock": true, 00:26:18.364 "num_base_bdevs": 4, 00:26:18.364 "num_base_bdevs_discovered": 2, 00:26:18.364 "num_base_bdevs_operational": 2, 00:26:18.364 "base_bdevs_list": [ 00:26:18.364 { 00:26:18.364 "name": null, 00:26:18.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.364 "is_configured": false, 00:26:18.364 "data_offset": 2048, 00:26:18.364 "data_size": 63488 00:26:18.364 }, 00:26:18.364 { 00:26:18.364 "name": null, 00:26:18.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.364 "is_configured": false, 00:26:18.364 "data_offset": 2048, 00:26:18.364 "data_size": 63488 00:26:18.364 }, 00:26:18.364 { 00:26:18.364 "name": "BaseBdev3", 00:26:18.364 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:18.364 "is_configured": true, 00:26:18.364 "data_offset": 2048, 00:26:18.364 "data_size": 63488 00:26:18.364 }, 00:26:18.364 { 00:26:18.364 "name": "BaseBdev4", 00:26:18.364 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:18.364 "is_configured": true, 00:26:18.364 "data_offset": 2048, 00:26:18.364 "data_size": 63488 00:26:18.364 } 00:26:18.364 ] 00:26:18.364 }' 00:26:18.364 19:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@649 -- # local es=0 00:26:18.364 19:10:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:18.364 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:18.624 [2024-06-10 19:10:33.276756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:18.624 [2024-06-10 19:10:33.276865] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:18.624 [2024-06-10 19:10:33.276880] 
bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:18.624 request: 00:26:18.624 { 00:26:18.624 "raid_bdev": "raid_bdev1", 00:26:18.624 "base_bdev": "BaseBdev1", 00:26:18.624 "method": "bdev_raid_add_base_bdev", 00:26:18.624 "req_id": 1 00:26:18.624 } 00:26:18.624 Got JSON-RPC error response 00:26:18.624 response: 00:26:18.624 { 00:26:18.624 "code": -22, 00:26:18.624 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:18.624 } 00:26:18.624 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # es=1 00:26:18.624 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:18.624 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:18.624 19:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:18.624 19:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.563 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.823 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.823 "name": "raid_bdev1", 00:26:19.823 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:19.823 "strip_size_kb": 0, 00:26:19.823 "state": "online", 00:26:19.823 "raid_level": "raid1", 00:26:19.823 "superblock": true, 00:26:19.823 "num_base_bdevs": 4, 00:26:19.823 "num_base_bdevs_discovered": 2, 00:26:19.823 "num_base_bdevs_operational": 2, 00:26:19.823 "base_bdevs_list": [ 00:26:19.823 { 00:26:19.823 "name": null, 00:26:19.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.823 "is_configured": false, 00:26:19.823 "data_offset": 2048, 00:26:19.823 "data_size": 63488 00:26:19.823 }, 00:26:19.823 { 00:26:19.823 "name": null, 00:26:19.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.823 "is_configured": false, 00:26:19.823 "data_offset": 2048, 00:26:19.823 "data_size": 63488 00:26:19.823 }, 00:26:19.823 { 00:26:19.823 "name": "BaseBdev3", 00:26:19.823 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 00:26:19.823 "is_configured": true, 00:26:19.823 "data_offset": 2048, 00:26:19.823 "data_size": 63488 00:26:19.823 }, 00:26:19.823 { 00:26:19.823 "name": "BaseBdev4", 00:26:19.823 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:19.823 "is_configured": true, 00:26:19.823 "data_offset": 2048, 00:26:19.823 "data_size": 63488 00:26:19.823 } 00:26:19.823 ] 00:26:19.823 }' 00:26:19.824 19:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.824 19:10:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.393 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.652 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:20.652 "name": "raid_bdev1", 00:26:20.652 "uuid": "bccdb4a1-bba6-44eb-bcb4-6a3f836aeb49", 00:26:20.652 "strip_size_kb": 0, 00:26:20.652 "state": "online", 00:26:20.652 "raid_level": "raid1", 00:26:20.652 "superblock": true, 00:26:20.652 "num_base_bdevs": 4, 00:26:20.652 "num_base_bdevs_discovered": 2, 00:26:20.652 "num_base_bdevs_operational": 2, 00:26:20.652 "base_bdevs_list": [ 00:26:20.652 { 00:26:20.652 "name": null, 00:26:20.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.652 "is_configured": false, 00:26:20.652 "data_offset": 2048, 00:26:20.652 "data_size": 63488 00:26:20.652 }, 00:26:20.652 { 00:26:20.652 "name": null, 00:26:20.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.652 "is_configured": false, 00:26:20.652 "data_offset": 2048, 00:26:20.652 "data_size": 63488 00:26:20.652 }, 00:26:20.652 { 00:26:20.652 "name": "BaseBdev3", 00:26:20.652 "uuid": "771b16b6-1b09-5b27-a277-35f0861eeb4d", 
00:26:20.652 "is_configured": true, 00:26:20.652 "data_offset": 2048, 00:26:20.652 "data_size": 63488 00:26:20.652 }, 00:26:20.652 { 00:26:20.652 "name": "BaseBdev4", 00:26:20.652 "uuid": "586b64a1-5317-554a-840f-f4eedbc4b224", 00:26:20.652 "is_configured": true, 00:26:20.652 "data_offset": 2048, 00:26:20.652 "data_size": 63488 00:26:20.652 } 00:26:20.652 ] 00:26:20.652 }' 00:26:20.652 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:20.652 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:20.652 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 1768489 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@949 -- # '[' -z 1768489 ']' 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # kill -0 1768489 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # uname 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1768489 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1768489' 00:26:20.913 killing process with pid 1768489 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # kill 1768489 00:26:20.913 
Received shutdown signal, test time was about 60.000000 seconds 00:26:20.913 00:26:20.913 Latency(us) 00:26:20.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.913 =================================================================================================================== 00:26:20.913 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:20.913 [2024-06-10 19:10:35.460394] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:20.913 [2024-06-10 19:10:35.460476] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:20.913 [2024-06-10 19:10:35.460527] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:20.913 [2024-06-10 19:10:35.460539] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d841c0 name raid_bdev1, state offline 00:26:20.913 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # wait 1768489 00:26:20.913 [2024-06-10 19:10:35.500902] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:26:21.173 00:26:21.173 real 0m36.496s 00:26:21.173 user 0m52.648s 00:26:21.173 sys 0m6.487s 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.173 ************************************ 00:26:21.173 END TEST raid_rebuild_test_sb 00:26:21.173 ************************************ 00:26:21.173 19:10:35 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:26:21.173 19:10:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:26:21.173 19:10:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:21.173 19:10:35 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.173 ************************************ 00:26:21.173 START TEST raid_rebuild_test_io 00:26:21.173 ************************************ 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 false true true 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:21.173 19:10:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:21.173 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1775079 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1775079 /var/tmp/spdk-raid.sock 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@830 -- # '[' -z 1775079 ']' 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:21.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:21.174 19:10:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:21.174 [2024-06-10 19:10:35.845155] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:26:21.174 [2024-06-10 19:10:35.845209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775079 ] 00:26:21.174 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:21.174 Zero copy mechanism will not be used. 
00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.0 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.1 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.2 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.3 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.4 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.5 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.6 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:01.7 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.0 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.1 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.2 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.3 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.4 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.5 cannot be used 00:26:21.174 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.6 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b6:02.7 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.0 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.1 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.2 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.3 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.4 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.5 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.6 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:01.7 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.0 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.1 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.2 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.3 cannot be used 00:26:21.174 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.4 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.5 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.6 cannot be used 00:26:21.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:21.174 EAL: Requested device 0000:b8:02.7 cannot be used 00:26:21.434 [2024-06-10 19:10:35.979683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.434 [2024-06-10 19:10:36.066625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.434 [2024-06-10 19:10:36.127827] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.434 [2024-06-10 19:10:36.127865] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:22.002 19:10:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:22.002 19:10:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@863 -- # return 0 00:26:22.002 19:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:22.002 19:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:22.262 BaseBdev1_malloc 00:26:22.262 19:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:22.521 [2024-06-10 19:10:37.168554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:22.521 [2024-06-10 19:10:37.168602] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:26:22.521 [2024-06-10 19:10:37.168624] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1200200 00:26:22.521 [2024-06-10 19:10:37.168636] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.521 [2024-06-10 19:10:37.170146] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.521 [2024-06-10 19:10:37.170173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:22.521 BaseBdev1 00:26:22.521 19:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:22.521 19:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:22.781 BaseBdev2_malloc 00:26:22.781 19:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:23.040 [2024-06-10 19:10:37.626286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:23.040 [2024-06-10 19:10:37.626325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.040 [2024-06-10 19:10:37.626342] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1397d90 00:26:23.040 [2024-06-10 19:10:37.626353] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.040 [2024-06-10 19:10:37.627773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.040 [2024-06-10 19:10:37.627799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:23.040 BaseBdev2 00:26:23.040 19:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:26:23.040 19:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:23.300 BaseBdev3_malloc 00:26:23.300 19:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:23.560 [2024-06-10 19:10:38.079826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:23.560 [2024-06-10 19:10:38.079866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.560 [2024-06-10 19:10:38.079883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139a540 00:26:23.560 [2024-06-10 19:10:38.079895] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.560 [2024-06-10 19:10:38.081235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.560 [2024-06-10 19:10:38.081261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:23.560 BaseBdev3 00:26:23.560 19:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:23.560 19:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:23.560 BaseBdev4_malloc 00:26:23.819 19:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:23.819 [2024-06-10 19:10:38.529255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:23.819 [2024-06-10 
19:10:38.529295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.819 [2024-06-10 19:10:38.529313] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139ab40 00:26:23.819 [2024-06-10 19:10:38.529324] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.819 [2024-06-10 19:10:38.530722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.819 [2024-06-10 19:10:38.530748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:23.819 BaseBdev4 00:26:23.819 19:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:24.079 spare_malloc 00:26:24.079 19:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:24.339 spare_delay 00:26:24.339 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:24.598 [2024-06-10 19:10:39.215293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:24.598 [2024-06-10 19:10:39.215332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.599 [2024-06-10 19:10:39.215351] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11faf50 00:26:24.599 [2024-06-10 19:10:39.215363] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.599 [2024-06-10 19:10:39.216757] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.599 [2024-06-10 19:10:39.216783] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:24.599 spare 00:26:24.599 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:24.858 [2024-06-10 19:10:39.427875] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.858 [2024-06-10 19:10:39.428997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:24.858 [2024-06-10 19:10:39.429046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:24.858 [2024-06-10 19:10:39.429087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:24.858 [2024-06-10 19:10:39.429157] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11fba70 00:26:24.858 [2024-06-10 19:10:39.429167] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:24.858 [2024-06-10 19:10:39.429353] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11ffed0 00:26:24.858 [2024-06-10 19:10:39.429491] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11fba70 00:26:24.858 [2024-06-10 19:10:39.429501] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11fba70 00:26:24.858 [2024-06-10 19:10:39.429610] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.858 19:10:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.858 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.119 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.119 "name": "raid_bdev1", 00:26:25.119 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:25.119 "strip_size_kb": 0, 00:26:25.119 "state": "online", 00:26:25.119 "raid_level": "raid1", 00:26:25.119 "superblock": false, 00:26:25.119 "num_base_bdevs": 4, 00:26:25.119 "num_base_bdevs_discovered": 4, 00:26:25.119 "num_base_bdevs_operational": 4, 00:26:25.119 "base_bdevs_list": [ 00:26:25.119 { 00:26:25.119 "name": "BaseBdev1", 00:26:25.119 "uuid": "d10152c8-aacb-5619-95d3-3cf418fd9e28", 00:26:25.119 "is_configured": true, 00:26:25.119 "data_offset": 0, 00:26:25.119 "data_size": 65536 00:26:25.119 }, 00:26:25.119 { 00:26:25.119 "name": "BaseBdev2", 00:26:25.119 "uuid": "9622e4c4-2563-5d79-9fc5-29d55623f09a", 00:26:25.119 "is_configured": true, 00:26:25.119 "data_offset": 0, 00:26:25.119 "data_size": 65536 00:26:25.119 }, 00:26:25.119 { 
00:26:25.119 "name": "BaseBdev3", 00:26:25.119 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:25.119 "is_configured": true, 00:26:25.119 "data_offset": 0, 00:26:25.119 "data_size": 65536 00:26:25.119 }, 00:26:25.119 { 00:26:25.119 "name": "BaseBdev4", 00:26:25.119 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:25.119 "is_configured": true, 00:26:25.119 "data_offset": 0, 00:26:25.119 "data_size": 65536 00:26:25.119 } 00:26:25.119 ] 00:26:25.119 }' 00:26:25.119 19:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.119 19:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:25.688 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:25.688 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:25.948 [2024-06-10 19:10:40.458820] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:25.948 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:26:25.948 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.948 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:26.207 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:26:26.207 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:26:26.207 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:26.207 19:10:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:26.207 [2024-06-10 19:10:40.801424] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11fa020 00:26:26.207 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:26.207 Zero copy mechanism will not be used. 00:26:26.207 Running I/O for 60 seconds... 00:26:26.207 [2024-06-10 19:10:40.919049] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:26.207 [2024-06-10 19:10:40.933923] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x11fa020 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.467 19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.467 
19:10:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.726 19:10:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:26.727 "name": "raid_bdev1", 00:26:26.727 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:26.727 "strip_size_kb": 0, 00:26:26.727 "state": "online", 00:26:26.727 "raid_level": "raid1", 00:26:26.727 "superblock": false, 00:26:26.727 "num_base_bdevs": 4, 00:26:26.727 "num_base_bdevs_discovered": 3, 00:26:26.727 "num_base_bdevs_operational": 3, 00:26:26.727 "base_bdevs_list": [ 00:26:26.727 { 00:26:26.727 "name": null, 00:26:26.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.727 "is_configured": false, 00:26:26.727 "data_offset": 0, 00:26:26.727 "data_size": 65536 00:26:26.727 }, 00:26:26.727 { 00:26:26.727 "name": "BaseBdev2", 00:26:26.727 "uuid": "9622e4c4-2563-5d79-9fc5-29d55623f09a", 00:26:26.727 "is_configured": true, 00:26:26.727 "data_offset": 0, 00:26:26.727 "data_size": 65536 00:26:26.727 }, 00:26:26.727 { 00:26:26.727 "name": "BaseBdev3", 00:26:26.727 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:26.727 "is_configured": true, 00:26:26.727 "data_offset": 0, 00:26:26.727 "data_size": 65536 00:26:26.727 }, 00:26:26.727 { 00:26:26.727 "name": "BaseBdev4", 00:26:26.727 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:26.727 "is_configured": true, 00:26:26.727 "data_offset": 0, 00:26:26.727 "data_size": 65536 00:26:26.727 } 00:26:26.727 ] 00:26:26.727 }' 00:26:26.727 19:10:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:26.727 19:10:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:27.295 19:10:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:27.295 [2024-06-10 19:10:42.017833] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:27.556 19:10:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:27.556 [2024-06-10 19:10:42.086478] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1297550 00:26:27.556 [2024-06-10 19:10:42.088722] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:27.556 [2024-06-10 19:10:42.198770] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:27.556 [2024-06-10 19:10:42.199877] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:27.818 [2024-06-10 19:10:42.445844] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:27.818 [2024-06-10 19:10:42.446420] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:28.078 [2024-06-10 19:10:42.789282] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:28.078 [2024-06-10 19:10:42.789641] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:28.336 [2024-06-10 19:10:42.999997] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.336 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.595 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:28.595 "name": "raid_bdev1", 00:26:28.595 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:28.595 "strip_size_kb": 0, 00:26:28.595 "state": "online", 00:26:28.595 "raid_level": "raid1", 00:26:28.595 "superblock": false, 00:26:28.595 "num_base_bdevs": 4, 00:26:28.595 "num_base_bdevs_discovered": 4, 00:26:28.595 "num_base_bdevs_operational": 4, 00:26:28.595 "process": { 00:26:28.595 "type": "rebuild", 00:26:28.595 "target": "spare", 00:26:28.595 "progress": { 00:26:28.595 "blocks": 12288, 00:26:28.595 "percent": 18 00:26:28.595 } 00:26:28.595 }, 00:26:28.595 "base_bdevs_list": [ 00:26:28.595 { 00:26:28.595 "name": "spare", 00:26:28.595 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:28.595 "is_configured": true, 00:26:28.595 "data_offset": 0, 00:26:28.595 "data_size": 65536 00:26:28.595 }, 00:26:28.595 { 00:26:28.595 "name": "BaseBdev2", 00:26:28.595 "uuid": "9622e4c4-2563-5d79-9fc5-29d55623f09a", 00:26:28.595 "is_configured": true, 00:26:28.595 "data_offset": 0, 00:26:28.595 "data_size": 65536 00:26:28.595 }, 00:26:28.595 { 00:26:28.595 "name": "BaseBdev3", 00:26:28.595 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:28.595 "is_configured": true, 00:26:28.595 "data_offset": 0, 00:26:28.595 "data_size": 65536 00:26:28.595 }, 00:26:28.595 { 00:26:28.595 "name": "BaseBdev4", 00:26:28.595 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:28.595 "is_configured": true, 00:26:28.595 "data_offset": 0, 00:26:28.595 
"data_size": 65536 00:26:28.595 } 00:26:28.595 ] 00:26:28.595 }' 00:26:28.595 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:28.595 [2024-06-10 19:10:43.334033] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:28.855 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:28.855 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:28.855 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:28.855 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:29.113 [2024-06-10 19:10:43.615103] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:29.113 [2024-06-10 19:10:43.674537] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:29.113 [2024-06-10 19:10:43.776675] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:29.113 [2024-06-10 19:10:43.796651] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.113 [2024-06-10 19:10:43.796676] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:29.113 [2024-06-10 19:10:43.796685] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:29.113 [2024-06-10 19:10:43.833453] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x11fa020 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:29.372 19:10:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.372 19:10:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.372 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.372 "name": "raid_bdev1", 00:26:29.372 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:29.372 "strip_size_kb": 0, 00:26:29.372 "state": "online", 00:26:29.372 "raid_level": "raid1", 00:26:29.372 "superblock": false, 00:26:29.372 "num_base_bdevs": 4, 00:26:29.372 "num_base_bdevs_discovered": 3, 00:26:29.372 "num_base_bdevs_operational": 3, 00:26:29.372 "base_bdevs_list": [ 00:26:29.372 { 00:26:29.372 "name": null, 00:26:29.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.372 "is_configured": false, 00:26:29.372 "data_offset": 0, 00:26:29.372 "data_size": 65536 00:26:29.372 }, 00:26:29.372 { 
00:26:29.372 "name": "BaseBdev2", 00:26:29.372 "uuid": "9622e4c4-2563-5d79-9fc5-29d55623f09a", 00:26:29.372 "is_configured": true, 00:26:29.372 "data_offset": 0, 00:26:29.372 "data_size": 65536 00:26:29.372 }, 00:26:29.372 { 00:26:29.372 "name": "BaseBdev3", 00:26:29.372 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:29.372 "is_configured": true, 00:26:29.372 "data_offset": 0, 00:26:29.372 "data_size": 65536 00:26:29.372 }, 00:26:29.372 { 00:26:29.372 "name": "BaseBdev4", 00:26:29.372 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:29.372 "is_configured": true, 00:26:29.372 "data_offset": 0, 00:26:29.372 "data_size": 65536 00:26:29.372 } 00:26:29.372 ] 00:26:29.372 }' 00:26:29.372 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.372 19:10:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.311 "name": "raid_bdev1", 00:26:30.311 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:30.311 "strip_size_kb": 0, 
00:26:30.311 "state": "online", 00:26:30.311 "raid_level": "raid1", 00:26:30.311 "superblock": false, 00:26:30.311 "num_base_bdevs": 4, 00:26:30.311 "num_base_bdevs_discovered": 3, 00:26:30.311 "num_base_bdevs_operational": 3, 00:26:30.311 "base_bdevs_list": [ 00:26:30.311 { 00:26:30.311 "name": null, 00:26:30.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.311 "is_configured": false, 00:26:30.311 "data_offset": 0, 00:26:30.311 "data_size": 65536 00:26:30.311 }, 00:26:30.311 { 00:26:30.311 "name": "BaseBdev2", 00:26:30.311 "uuid": "9622e4c4-2563-5d79-9fc5-29d55623f09a", 00:26:30.311 "is_configured": true, 00:26:30.311 "data_offset": 0, 00:26:30.311 "data_size": 65536 00:26:30.311 }, 00:26:30.311 { 00:26:30.311 "name": "BaseBdev3", 00:26:30.311 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:30.311 "is_configured": true, 00:26:30.311 "data_offset": 0, 00:26:30.311 "data_size": 65536 00:26:30.311 }, 00:26:30.311 { 00:26:30.311 "name": "BaseBdev4", 00:26:30.311 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:30.311 "is_configured": true, 00:26:30.311 "data_offset": 0, 00:26:30.311 "data_size": 65536 00:26:30.311 } 00:26:30.311 ] 00:26:30.311 }' 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:30.311 19:10:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:30.311 19:10:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:30.311 19:10:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:30.570 [2024-06-10 19:10:45.217441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:30.570 19:10:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:30.570 [2024-06-10 19:10:45.260728] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1396920 00:26:30.570 [2024-06-10 19:10:45.262126] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:30.830 [2024-06-10 19:10:45.381643] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:30.830 [2024-06-10 19:10:45.381921] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:30.830 [2024-06-10 19:10:45.525180] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:31.397 [2024-06-10 19:10:45.911671] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:31.397 [2024-06-10 19:10:46.032187] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:31.397 [2024-06-10 19:10:46.032342] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.656 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.656 [2024-06-10 19:10:46.411791] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:31.656 [2024-06-10 19:10:46.411955] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:31.915 "name": "raid_bdev1", 00:26:31.915 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:31.915 "strip_size_kb": 0, 00:26:31.915 "state": "online", 00:26:31.915 "raid_level": "raid1", 00:26:31.915 "superblock": false, 00:26:31.915 "num_base_bdevs": 4, 00:26:31.915 "num_base_bdevs_discovered": 4, 00:26:31.915 "num_base_bdevs_operational": 4, 00:26:31.915 "process": { 00:26:31.915 "type": "rebuild", 00:26:31.915 "target": "spare", 00:26:31.915 "progress": { 00:26:31.915 "blocks": 16384, 00:26:31.915 "percent": 25 00:26:31.915 } 00:26:31.915 }, 00:26:31.915 "base_bdevs_list": [ 00:26:31.915 { 00:26:31.915 "name": "spare", 00:26:31.915 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:31.915 "is_configured": true, 00:26:31.915 "data_offset": 0, 00:26:31.915 "data_size": 65536 00:26:31.915 }, 00:26:31.915 { 00:26:31.915 "name": "BaseBdev2", 00:26:31.915 "uuid": "9622e4c4-2563-5d79-9fc5-29d55623f09a", 00:26:31.915 "is_configured": true, 00:26:31.915 "data_offset": 0, 00:26:31.915 "data_size": 65536 00:26:31.915 }, 00:26:31.915 { 00:26:31.915 "name": "BaseBdev3", 00:26:31.915 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:31.915 "is_configured": true, 00:26:31.915 "data_offset": 0, 00:26:31.915 "data_size": 65536 00:26:31.915 }, 00:26:31.915 { 00:26:31.915 "name": "BaseBdev4", 00:26:31.915 
"uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:31.915 "is_configured": true, 00:26:31.915 "data_offset": 0, 00:26:31.915 "data_size": 65536 00:26:31.915 } 00:26:31.915 ] 00:26:31.915 }' 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:26:31.915 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:32.174 [2024-06-10 19:10:46.753297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:32.174 [2024-06-10 19:10:46.842519] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x11fa020 00:26:32.174 [2024-06-10 19:10:46.842543] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1396920 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.174 19:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.433 "name": "raid_bdev1", 00:26:32.433 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:32.433 "strip_size_kb": 0, 00:26:32.433 "state": "online", 00:26:32.433 "raid_level": "raid1", 00:26:32.433 "superblock": false, 00:26:32.433 "num_base_bdevs": 4, 00:26:32.433 "num_base_bdevs_discovered": 3, 00:26:32.433 "num_base_bdevs_operational": 3, 00:26:32.433 "process": { 00:26:32.433 "type": "rebuild", 00:26:32.433 "target": "spare", 00:26:32.433 "progress": { 00:26:32.433 "blocks": 22528, 00:26:32.433 "percent": 34 00:26:32.433 } 00:26:32.433 }, 00:26:32.433 "base_bdevs_list": [ 00:26:32.433 { 00:26:32.433 "name": "spare", 00:26:32.433 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:32.433 "is_configured": true, 00:26:32.433 "data_offset": 0, 00:26:32.433 "data_size": 65536 00:26:32.433 }, 00:26:32.433 { 00:26:32.433 "name": null, 00:26:32.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.433 "is_configured": false, 00:26:32.433 "data_offset": 0, 00:26:32.433 "data_size": 65536 00:26:32.433 }, 00:26:32.433 { 00:26:32.433 "name": "BaseBdev3", 00:26:32.433 "uuid": 
"8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:32.433 "is_configured": true, 00:26:32.433 "data_offset": 0, 00:26:32.433 "data_size": 65536 00:26:32.433 }, 00:26:32.433 { 00:26:32.433 "name": "BaseBdev4", 00:26:32.433 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:32.433 "is_configured": true, 00:26:32.433 "data_offset": 0, 00:26:32.433 "data_size": 65536 00:26:32.433 } 00:26:32.433 ] 00:26:32.433 }' 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=872 00:26:32.433 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.692 [2024-06-10 
19:10:47.243881] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:26:32.692 [2024-06-10 19:10:47.363521] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.692 "name": "raid_bdev1", 00:26:32.692 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:32.692 "strip_size_kb": 0, 00:26:32.692 "state": "online", 00:26:32.692 "raid_level": "raid1", 00:26:32.692 "superblock": false, 00:26:32.692 "num_base_bdevs": 4, 00:26:32.692 "num_base_bdevs_discovered": 3, 00:26:32.692 "num_base_bdevs_operational": 3, 00:26:32.692 "process": { 00:26:32.692 "type": "rebuild", 00:26:32.692 "target": "spare", 00:26:32.692 "progress": { 00:26:32.692 "blocks": 28672, 00:26:32.692 "percent": 43 00:26:32.692 } 00:26:32.692 }, 00:26:32.692 "base_bdevs_list": [ 00:26:32.692 { 00:26:32.692 "name": "spare", 00:26:32.692 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:32.692 "is_configured": true, 00:26:32.692 "data_offset": 0, 00:26:32.692 "data_size": 65536 00:26:32.692 }, 00:26:32.692 { 00:26:32.692 "name": null, 00:26:32.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.692 "is_configured": false, 00:26:32.692 "data_offset": 0, 00:26:32.692 "data_size": 65536 00:26:32.692 }, 00:26:32.692 { 00:26:32.692 "name": "BaseBdev3", 00:26:32.692 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:32.692 "is_configured": true, 00:26:32.692 "data_offset": 0, 00:26:32.692 "data_size": 65536 00:26:32.692 }, 00:26:32.692 { 00:26:32.692 "name": "BaseBdev4", 00:26:32.692 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:32.692 "is_configured": true, 00:26:32.692 "data_offset": 0, 00:26:32.692 "data_size": 65536 00:26:32.692 } 00:26:32.692 ] 00:26:32.692 }' 00:26:32.692 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- 
# jq -r '.process.type // "none"' 00:26:32.952 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:32.952 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.952 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:32.952 19:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:32.952 [2024-06-10 19:10:47.685236] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:26:33.521 [2024-06-10 19:10:48.125060] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:26:33.521 [2024-06-10 19:10:48.226300] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.780 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.038 [2024-06-10 19:10:48.579721] 
bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:26:34.038 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.038 "name": "raid_bdev1", 00:26:34.038 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:34.038 "strip_size_kb": 0, 00:26:34.038 "state": "online", 00:26:34.038 "raid_level": "raid1", 00:26:34.038 "superblock": false, 00:26:34.038 "num_base_bdevs": 4, 00:26:34.038 "num_base_bdevs_discovered": 3, 00:26:34.038 "num_base_bdevs_operational": 3, 00:26:34.038 "process": { 00:26:34.038 "type": "rebuild", 00:26:34.038 "target": "spare", 00:26:34.038 "progress": { 00:26:34.038 "blocks": 47104, 00:26:34.038 "percent": 71 00:26:34.038 } 00:26:34.038 }, 00:26:34.038 "base_bdevs_list": [ 00:26:34.038 { 00:26:34.038 "name": "spare", 00:26:34.038 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:34.038 "is_configured": true, 00:26:34.038 "data_offset": 0, 00:26:34.038 "data_size": 65536 00:26:34.038 }, 00:26:34.038 { 00:26:34.038 "name": null, 00:26:34.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.039 "is_configured": false, 00:26:34.039 "data_offset": 0, 00:26:34.039 "data_size": 65536 00:26:34.039 }, 00:26:34.039 { 00:26:34.039 "name": "BaseBdev3", 00:26:34.039 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:34.039 "is_configured": true, 00:26:34.039 "data_offset": 0, 00:26:34.039 "data_size": 65536 00:26:34.039 }, 00:26:34.039 { 00:26:34.039 "name": "BaseBdev4", 00:26:34.039 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:34.039 "is_configured": true, 00:26:34.039 "data_offset": 0, 00:26:34.039 "data_size": 65536 00:26:34.039 } 00:26:34.039 ] 00:26:34.039 }' 00:26:34.039 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.039 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.039 19:10:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.297 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.297 19:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:35.232 [2024-06-10 19:10:49.709636] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:35.232 [2024-06-10 19:10:49.809936] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:35.232 [2024-06-10 19:10:49.811052] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.232 19:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.492 "name": "raid_bdev1", 00:26:35.492 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:35.492 "strip_size_kb": 0, 00:26:35.492 "state": "online", 00:26:35.492 "raid_level": "raid1", 00:26:35.492 "superblock": 
false, 00:26:35.492 "num_base_bdevs": 4, 00:26:35.492 "num_base_bdevs_discovered": 3, 00:26:35.492 "num_base_bdevs_operational": 3, 00:26:35.492 "base_bdevs_list": [ 00:26:35.492 { 00:26:35.492 "name": "spare", 00:26:35.492 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:35.492 "is_configured": true, 00:26:35.492 "data_offset": 0, 00:26:35.492 "data_size": 65536 00:26:35.492 }, 00:26:35.492 { 00:26:35.492 "name": null, 00:26:35.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.492 "is_configured": false, 00:26:35.492 "data_offset": 0, 00:26:35.492 "data_size": 65536 00:26:35.492 }, 00:26:35.492 { 00:26:35.492 "name": "BaseBdev3", 00:26:35.492 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:35.492 "is_configured": true, 00:26:35.492 "data_offset": 0, 00:26:35.492 "data_size": 65536 00:26:35.492 }, 00:26:35.492 { 00:26:35.492 "name": "BaseBdev4", 00:26:35.492 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:35.492 "is_configured": true, 00:26:35.492 "data_offset": 0, 00:26:35.492 "data_size": 65536 00:26:35.492 } 00:26:35.492 ] 00:26:35.492 }' 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:35.492 19:10:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.492 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.750 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.750 "name": "raid_bdev1", 00:26:35.750 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:35.750 "strip_size_kb": 0, 00:26:35.750 "state": "online", 00:26:35.750 "raid_level": "raid1", 00:26:35.750 "superblock": false, 00:26:35.750 "num_base_bdevs": 4, 00:26:35.750 "num_base_bdevs_discovered": 3, 00:26:35.750 "num_base_bdevs_operational": 3, 00:26:35.750 "base_bdevs_list": [ 00:26:35.750 { 00:26:35.750 "name": "spare", 00:26:35.750 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:35.750 "is_configured": true, 00:26:35.750 "data_offset": 0, 00:26:35.750 "data_size": 65536 00:26:35.750 }, 00:26:35.750 { 00:26:35.750 "name": null, 00:26:35.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.750 "is_configured": false, 00:26:35.750 "data_offset": 0, 00:26:35.750 "data_size": 65536 00:26:35.750 }, 00:26:35.750 { 00:26:35.750 "name": "BaseBdev3", 00:26:35.750 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:35.750 "is_configured": true, 00:26:35.750 "data_offset": 0, 00:26:35.750 "data_size": 65536 00:26:35.750 }, 00:26:35.750 { 00:26:35.750 "name": "BaseBdev4", 00:26:35.750 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:35.750 "is_configured": true, 00:26:35.750 "data_offset": 0, 00:26:35.750 "data_size": 65536 00:26:35.750 } 00:26:35.750 ] 00:26:35.750 }' 00:26:35.750 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- 
# jq -r '.process.type // "none"' 00:26:35.750 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.751 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.010 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.010 "name": "raid_bdev1", 00:26:36.010 "uuid": "84ca4ad0-35e9-4f47-873e-54d0e25d7e0e", 00:26:36.010 
"strip_size_kb": 0, 00:26:36.010 "state": "online", 00:26:36.010 "raid_level": "raid1", 00:26:36.010 "superblock": false, 00:26:36.010 "num_base_bdevs": 4, 00:26:36.010 "num_base_bdevs_discovered": 3, 00:26:36.010 "num_base_bdevs_operational": 3, 00:26:36.010 "base_bdevs_list": [ 00:26:36.010 { 00:26:36.010 "name": "spare", 00:26:36.010 "uuid": "b790e232-b11a-5539-bc73-0802fe334eca", 00:26:36.010 "is_configured": true, 00:26:36.010 "data_offset": 0, 00:26:36.010 "data_size": 65536 00:26:36.010 }, 00:26:36.010 { 00:26:36.010 "name": null, 00:26:36.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.010 "is_configured": false, 00:26:36.010 "data_offset": 0, 00:26:36.010 "data_size": 65536 00:26:36.010 }, 00:26:36.010 { 00:26:36.010 "name": "BaseBdev3", 00:26:36.010 "uuid": "8e3f78e9-4c73-5bfd-8159-df0a2f474dc0", 00:26:36.010 "is_configured": true, 00:26:36.010 "data_offset": 0, 00:26:36.010 "data_size": 65536 00:26:36.010 }, 00:26:36.010 { 00:26:36.010 "name": "BaseBdev4", 00:26:36.010 "uuid": "594831e8-9a76-5b60-b27c-dc9e13b8c63a", 00:26:36.010 "is_configured": true, 00:26:36.010 "data_offset": 0, 00:26:36.010 "data_size": 65536 00:26:36.010 } 00:26:36.010 ] 00:26:36.010 }' 00:26:36.010 19:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.010 19:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:36.577 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:36.837 [2024-06-10 19:10:51.449850] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.837 [2024-06-10 19:10:51.449880] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:36.837 00:26:36.837 Latency(us) 00:26:36.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.837 Job: 
raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:36.837 raid_bdev1 : 10.64 94.86 284.59 0.00 0.00 14548.21 280.17 119957.09 00:26:36.837 =================================================================================================================== 00:26:36.837 Total : 94.86 284.59 0.00 0.00 14548.21 280.17 119957.09 00:26:36.837 [2024-06-10 19:10:51.469513] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.837 [2024-06-10 19:10:51.469537] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.837 [2024-06-10 19:10:51.469631] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:36.837 [2024-06-10 19:10:51.469643] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11fba70 name raid_bdev1, state offline 00:26:36.837 0 00:26:36.837 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.837 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:37.096 19:10:51 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.096 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:26:37.355 /dev/nbd0 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:37.355 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.356 1+0 records in 00:26:37.356 1+0 records out 
00:26:37.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237361 s, 17.3 MB/s 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.356 19:10:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:37.356 /dev/nbd1 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:37.356 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.356 1+0 records in 00:26:37.356 1+0 records out 00:26:37.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000211915 s, 19.3 MB/s 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:37.615 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.874 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:26:37.874 /dev/nbd1 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local i 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # break 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:38.156 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.157 1+0 records in 00:26:38.157 1+0 records out 00:26:38.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293487 s, 14.0 MB/s 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # size=4096 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:38.157 
19:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # return 0 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:38.157 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:38.443 19:10:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:38.443 19:10:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 1775079 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@949 -- # '[' -z 1775079 ']' 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # kill -0 1775079 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # uname 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1775079 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1775079' 00:26:38.703 killing process with pid 1775079 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # kill 1775079 00:26:38.703 Received shutdown signal, test time was about 12.462404 seconds 00:26:38.703 00:26:38.703 Latency(us) 00:26:38.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.703 =================================================================================================================== 00:26:38.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.703 [2024-06-10 19:10:53.296360] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:38.703 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # wait 1775079 00:26:38.703 [2024-06-10 19:10:53.332621] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:38.962 19:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:26:38.962 00:26:38.962 real 0m17.750s 00:26:38.962 user 0m27.222s 00:26:38.962 sys 0m3.290s 00:26:38.962 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:26:38.962 19:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:38.962 ************************************ 00:26:38.962 END TEST raid_rebuild_test_io 00:26:38.962 ************************************ 00:26:38.962 19:10:53 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:26:38.962 19:10:53 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:26:38.962 19:10:53 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:38.962 19:10:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:38.962 ************************************ 00:26:38.962 START TEST raid_rebuild_test_sb_io 00:26:38.962 ************************************ 00:26:38.962 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 4 true true true 00:26:38.962 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:38.962 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:38.963 19:10:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = 
true ']' 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1778277 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1778277 /var/tmp/spdk-raid.sock 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@830 -- # '[' -z 1778277 ']' 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:38.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:38.963 19:10:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:38.963 [2024-06-10 19:10:53.680400] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:26:38.963 [2024-06-10 19:10:53.680454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778277 ] 00:26:38.963 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:38.963 Zero copy mechanism will not be used. 
00:26:39.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:39.222 EAL: Requested device 0000:b6:01.0 cannot be used 00:26:39.222 [the same qat_pci_device_allocate()/EAL message pair repeats for each of devices 0000:b6:01.1-0000:b6:02.7 and 0000:b8:01.0-0000:b8:02.3]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:39.222 EAL: Requested device 0000:b8:02.4 cannot be used 00:26:39.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:39.222 EAL: Requested device 0000:b8:02.5 cannot be used 00:26:39.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:39.222 EAL: Requested device 0000:b8:02.6 cannot be used 00:26:39.222 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:39.222 EAL: Requested device 0000:b8:02.7 cannot be used 00:26:39.222 [2024-06-10 19:10:53.804746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.222 [2024-06-10 19:10:53.889108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.222 [2024-06-10 19:10:53.952674] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.222 [2024-06-10 19:10:53.952723] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:40.165 19:10:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:40.165 19:10:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@863 -- # return 0 00:26:40.165 19:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:40.165 19:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:40.165 BaseBdev1_malloc 00:26:40.165 19:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:40.425 [2024-06-10 19:10:55.013380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:40.425 [2024-06-10 19:10:55.013421] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.425 [2024-06-10 19:10:55.013443] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1117200 00:26:40.425 [2024-06-10 19:10:55.013455] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.425 [2024-06-10 19:10:55.014958] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.425 [2024-06-10 19:10:55.014984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:40.425 BaseBdev1 00:26:40.425 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:40.425 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:40.683 BaseBdev2_malloc 00:26:40.683 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:40.942 [2024-06-10 19:10:55.467174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:40.942 [2024-06-10 19:10:55.467214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.942 [2024-06-10 19:10:55.467233] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12aed90 00:26:40.942 [2024-06-10 19:10:55.467245] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.942 [2024-06-10 19:10:55.468659] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.942 [2024-06-10 19:10:55.468686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:40.942 BaseBdev2 00:26:40.942 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:40.942 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:41.201 BaseBdev3_malloc 00:26:41.201 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:41.201 [2024-06-10 19:10:55.924625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:41.201 [2024-06-10 19:10:55.924666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.201 [2024-06-10 19:10:55.924686] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b1540 00:26:41.201 [2024-06-10 19:10:55.924698] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.201 [2024-06-10 19:10:55.926066] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.201 [2024-06-10 19:10:55.926090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:41.201 BaseBdev3 00:26:41.201 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:41.201 19:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:41.459 BaseBdev4_malloc 00:26:41.459 19:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:41.717 [2024-06-10 19:10:56.373996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:26:41.717 [2024-06-10 19:10:56.374035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.717 [2024-06-10 19:10:56.374055] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b1b40 00:26:41.717 [2024-06-10 19:10:56.374067] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.717 [2024-06-10 19:10:56.375384] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.717 [2024-06-10 19:10:56.375409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:41.717 BaseBdev4 00:26:41.717 19:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:41.976 spare_malloc 00:26:41.976 19:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:42.235 spare_delay 00:26:42.235 19:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:42.493 [2024-06-10 19:10:57.056077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:42.493 [2024-06-10 19:10:57.056115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.493 [2024-06-10 19:10:57.056138] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1111f50 00:26:42.493 [2024-06-10 19:10:57.056150] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.493 [2024-06-10 19:10:57.057538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:26:42.493 [2024-06-10 19:10:57.057564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:42.493 spare 00:26:42.493 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:42.751 [2024-06-10 19:10:57.280694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:42.751 [2024-06-10 19:10:57.281853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.751 [2024-06-10 19:10:57.281902] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:42.751 [2024-06-10 19:10:57.281943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:42.751 [2024-06-10 19:10:57.282121] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1112a70 00:26:42.751 [2024-06-10 19:10:57.282132] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:42.751 [2024-06-10 19:10:57.282310] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1116ed0 00:26:42.751 [2024-06-10 19:10:57.282442] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1112a70 00:26:42.751 [2024-06-10 19:10:57.282451] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1112a70 00:26:42.751 [2024-06-10 19:10:57.282540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.751 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.009 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.009 "name": "raid_bdev1", 00:26:43.009 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:43.009 "strip_size_kb": 0, 00:26:43.009 "state": "online", 00:26:43.009 "raid_level": "raid1", 00:26:43.009 "superblock": true, 00:26:43.010 "num_base_bdevs": 4, 00:26:43.010 "num_base_bdevs_discovered": 4, 00:26:43.010 "num_base_bdevs_operational": 4, 00:26:43.010 "base_bdevs_list": [ 00:26:43.010 { 00:26:43.010 "name": "BaseBdev1", 00:26:43.010 "uuid": "1342b4f9-4bd2-5789-9aa1-7c319ee11a0d", 00:26:43.010 "is_configured": true, 00:26:43.010 "data_offset": 2048, 00:26:43.010 "data_size": 63488 00:26:43.010 }, 00:26:43.010 { 00:26:43.010 "name": "BaseBdev2", 00:26:43.010 "uuid": "18c4ade3-9cac-5cd2-831d-826af0c9086f", 00:26:43.010 
"is_configured": true, 00:26:43.010 "data_offset": 2048, 00:26:43.010 "data_size": 63488 00:26:43.010 }, 00:26:43.010 { 00:26:43.010 "name": "BaseBdev3", 00:26:43.010 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:43.010 "is_configured": true, 00:26:43.010 "data_offset": 2048, 00:26:43.010 "data_size": 63488 00:26:43.010 }, 00:26:43.010 { 00:26:43.010 "name": "BaseBdev4", 00:26:43.010 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:43.010 "is_configured": true, 00:26:43.010 "data_offset": 2048, 00:26:43.010 "data_size": 63488 00:26:43.010 } 00:26:43.010 ] 00:26:43.010 }' 00:26:43.010 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.010 19:10:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:43.574 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:43.574 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:43.574 [2024-06-10 19:10:58.295582] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:43.574 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:26:43.574 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.574 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:43.832 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:26:43.832 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:26:43.832 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:43.832 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:44.090 [2024-06-10 19:10:58.646248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x110dcb0 00:26:44.090 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:44.090 Zero copy mechanism will not be used. 00:26:44.090 Running I/O for 60 seconds... 00:26:44.090 [2024-06-10 19:10:58.755881] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:44.090 [2024-06-10 19:10:58.770823] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x110dcb0 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 
00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.090 19:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.348 19:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.348 "name": "raid_bdev1", 00:26:44.349 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:44.349 "strip_size_kb": 0, 00:26:44.349 "state": "online", 00:26:44.349 "raid_level": "raid1", 00:26:44.349 "superblock": true, 00:26:44.349 "num_base_bdevs": 4, 00:26:44.349 "num_base_bdevs_discovered": 3, 00:26:44.349 "num_base_bdevs_operational": 3, 00:26:44.349 "base_bdevs_list": [ 00:26:44.349 { 00:26:44.349 "name": null, 00:26:44.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.349 "is_configured": false, 00:26:44.349 "data_offset": 2048, 00:26:44.349 "data_size": 63488 00:26:44.349 }, 00:26:44.349 { 00:26:44.349 "name": "BaseBdev2", 00:26:44.349 "uuid": "18c4ade3-9cac-5cd2-831d-826af0c9086f", 00:26:44.349 "is_configured": true, 00:26:44.349 "data_offset": 2048, 00:26:44.349 "data_size": 63488 00:26:44.349 }, 00:26:44.349 { 00:26:44.349 "name": "BaseBdev3", 00:26:44.349 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:44.349 "is_configured": true, 00:26:44.349 "data_offset": 2048, 00:26:44.349 "data_size": 63488 00:26:44.349 }, 00:26:44.349 { 00:26:44.349 "name": "BaseBdev4", 00:26:44.349 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:44.349 "is_configured": true, 00:26:44.349 "data_offset": 2048, 00:26:44.349 "data_size": 63488 00:26:44.349 } 00:26:44.349 ] 00:26:44.349 }' 00:26:44.349 19:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.349 19:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:44.916 19:10:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:45.175 [2024-06-10 19:10:59.845008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:45.175 19:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:45.175 [2024-06-10 19:10:59.905384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11ae4f0 00:26:45.175 [2024-06-10 19:10:59.907629] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:45.434 [2024-06-10 19:11:00.172130] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:45.434 [2024-06-10 19:11:00.172715] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:46.001 [2024-06-10 19:11:00.514759] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:46.260 [2024-06-10 19:11:00.760147] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.260 19:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.260 [2024-06-10 19:11:00.989952] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:46.260 [2024-06-10 19:11:00.990235] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:46.519 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.519 "name": "raid_bdev1", 00:26:46.519 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:46.519 "strip_size_kb": 0, 00:26:46.519 "state": "online", 00:26:46.519 "raid_level": "raid1", 00:26:46.519 "superblock": true, 00:26:46.519 "num_base_bdevs": 4, 00:26:46.519 "num_base_bdevs_discovered": 4, 00:26:46.519 "num_base_bdevs_operational": 4, 00:26:46.519 "process": { 00:26:46.519 "type": "rebuild", 00:26:46.519 "target": "spare", 00:26:46.519 "progress": { 00:26:46.519 "blocks": 14336, 00:26:46.519 "percent": 22 00:26:46.519 } 00:26:46.519 }, 00:26:46.519 "base_bdevs_list": [ 00:26:46.519 { 00:26:46.519 "name": "spare", 00:26:46.519 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:46.519 "is_configured": true, 00:26:46.519 "data_offset": 2048, 00:26:46.519 "data_size": 63488 00:26:46.519 }, 00:26:46.519 { 00:26:46.519 "name": "BaseBdev2", 00:26:46.519 "uuid": "18c4ade3-9cac-5cd2-831d-826af0c9086f", 00:26:46.519 "is_configured": true, 00:26:46.519 "data_offset": 2048, 00:26:46.519 "data_size": 63488 00:26:46.519 }, 00:26:46.519 { 00:26:46.519 "name": "BaseBdev3", 00:26:46.519 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:46.519 "is_configured": true, 00:26:46.519 "data_offset": 2048, 00:26:46.519 "data_size": 63488 00:26:46.519 }, 00:26:46.519 { 00:26:46.519 "name": "BaseBdev4", 
00:26:46.519 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:46.519 "is_configured": true, 00:26:46.519 "data_offset": 2048, 00:26:46.519 "data_size": 63488 00:26:46.519 } 00:26:46.519 ] 00:26:46.519 }' 00:26:46.519 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:46.519 [2024-06-10 19:11:01.193463] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:46.519 [2024-06-10 19:11:01.193699] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:46.519 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.519 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.519 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.519 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:46.778 [2024-06-10 19:11:01.447720] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.036 [2024-06-10 19:11:01.624881] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:47.036 [2024-06-10 19:11:01.626432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.036 [2024-06-10 19:11:01.626459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.036 [2024-06-10 19:11:01.626468] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:47.036 [2024-06-10 19:11:01.654768] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x110dcb0 00:26:47.036 19:11:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.036 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.295 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.295 "name": "raid_bdev1", 00:26:47.295 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:47.295 "strip_size_kb": 0, 00:26:47.295 "state": "online", 00:26:47.295 "raid_level": "raid1", 00:26:47.295 "superblock": true, 00:26:47.295 "num_base_bdevs": 4, 00:26:47.295 "num_base_bdevs_discovered": 3, 00:26:47.295 "num_base_bdevs_operational": 3, 00:26:47.295 "base_bdevs_list": [ 00:26:47.295 { 00:26:47.295 "name": null, 00:26:47.295 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:47.295 "is_configured": false, 00:26:47.295 "data_offset": 2048, 00:26:47.295 "data_size": 63488 00:26:47.295 }, 00:26:47.295 { 00:26:47.295 "name": "BaseBdev2", 00:26:47.295 "uuid": "18c4ade3-9cac-5cd2-831d-826af0c9086f", 00:26:47.295 "is_configured": true, 00:26:47.295 "data_offset": 2048, 00:26:47.295 "data_size": 63488 00:26:47.295 }, 00:26:47.295 { 00:26:47.295 "name": "BaseBdev3", 00:26:47.295 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:47.295 "is_configured": true, 00:26:47.295 "data_offset": 2048, 00:26:47.295 "data_size": 63488 00:26:47.295 }, 00:26:47.295 { 00:26:47.295 "name": "BaseBdev4", 00:26:47.295 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:47.295 "is_configured": true, 00:26:47.295 "data_offset": 2048, 00:26:47.295 "data_size": 63488 00:26:47.295 } 00:26:47.295 ] 00:26:47.295 }' 00:26:47.295 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.295 19:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.864 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.122 19:11:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.122 "name": "raid_bdev1", 00:26:48.122 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:48.122 "strip_size_kb": 0, 00:26:48.122 "state": "online", 00:26:48.122 "raid_level": "raid1", 00:26:48.122 "superblock": true, 00:26:48.122 "num_base_bdevs": 4, 00:26:48.122 "num_base_bdevs_discovered": 3, 00:26:48.122 "num_base_bdevs_operational": 3, 00:26:48.122 "base_bdevs_list": [ 00:26:48.122 { 00:26:48.122 "name": null, 00:26:48.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.122 "is_configured": false, 00:26:48.122 "data_offset": 2048, 00:26:48.122 "data_size": 63488 00:26:48.122 }, 00:26:48.122 { 00:26:48.122 "name": "BaseBdev2", 00:26:48.122 "uuid": "18c4ade3-9cac-5cd2-831d-826af0c9086f", 00:26:48.122 "is_configured": true, 00:26:48.122 "data_offset": 2048, 00:26:48.122 "data_size": 63488 00:26:48.122 }, 00:26:48.122 { 00:26:48.122 "name": "BaseBdev3", 00:26:48.122 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:48.122 "is_configured": true, 00:26:48.122 "data_offset": 2048, 00:26:48.122 "data_size": 63488 00:26:48.122 }, 00:26:48.122 { 00:26:48.122 "name": "BaseBdev4", 00:26:48.122 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:48.122 "is_configured": true, 00:26:48.122 "data_offset": 2048, 00:26:48.122 "data_size": 63488 00:26:48.122 } 00:26:48.122 ] 00:26:48.122 }' 00:26:48.122 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.122 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:48.122 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.122 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:48.122 19:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:48.381 [2024-06-10 19:11:03.056196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:48.381 [2024-06-10 19:11:03.115261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11ab870 00:26:48.381 19:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:48.381 [2024-06-10 19:11:03.116981] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:48.640 [2024-06-10 19:11:03.225959] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:48.640 [2024-06-10 19:11:03.227035] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:48.899 [2024-06-10 19:11:03.448446] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:49.158 [2024-06-10 19:11:03.780419] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:49.158 [2024-06-10 19:11:03.780719] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:49.418 [2024-06-10 19:11:04.002392] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.418 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.677 [2024-06-10 19:11:04.226352] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:49.677 [2024-06-10 19:11:04.226794] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:49.677 [2024-06-10 19:11:04.344970] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:49.677 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.677 "name": "raid_bdev1", 00:26:49.677 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:49.677 "strip_size_kb": 0, 00:26:49.677 "state": "online", 00:26:49.677 "raid_level": "raid1", 00:26:49.677 "superblock": true, 00:26:49.677 "num_base_bdevs": 4, 00:26:49.677 "num_base_bdevs_discovered": 4, 00:26:49.677 "num_base_bdevs_operational": 4, 00:26:49.677 "process": { 00:26:49.677 "type": "rebuild", 00:26:49.677 "target": "spare", 00:26:49.677 "progress": { 00:26:49.677 "blocks": 14336, 00:26:49.677 "percent": 22 00:26:49.677 } 00:26:49.677 }, 00:26:49.677 "base_bdevs_list": [ 00:26:49.677 { 00:26:49.677 "name": "spare", 00:26:49.677 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:49.677 "is_configured": true, 00:26:49.677 "data_offset": 2048, 00:26:49.677 "data_size": 63488 00:26:49.677 }, 00:26:49.677 { 00:26:49.677 "name": "BaseBdev2", 00:26:49.677 "uuid": 
"18c4ade3-9cac-5cd2-831d-826af0c9086f", 00:26:49.677 "is_configured": true, 00:26:49.677 "data_offset": 2048, 00:26:49.677 "data_size": 63488 00:26:49.677 }, 00:26:49.677 { 00:26:49.677 "name": "BaseBdev3", 00:26:49.677 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:49.677 "is_configured": true, 00:26:49.677 "data_offset": 2048, 00:26:49.677 "data_size": 63488 00:26:49.677 }, 00:26:49.677 { 00:26:49.677 "name": "BaseBdev4", 00:26:49.677 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:49.677 "is_configured": true, 00:26:49.677 "data_offset": 2048, 00:26:49.677 "data_size": 63488 00:26:49.677 } 00:26:49.677 ] 00:26:49.677 }' 00:26:49.677 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.677 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.677 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:49.936 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:26:49.936 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 
00:26:49.936 [2024-06-10 19:11:04.647190] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:50.195 [2024-06-10 19:11:04.945859] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x110dcb0 00:26:50.195 [2024-06-10 19:11:04.945888] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x11ab870 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.455 19:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.455 [2024-06-10 19:11:05.066384] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:50.455 [2024-06-10 19:11:05.066789] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.715 "name": "raid_bdev1", 00:26:50.715 "uuid": 
"fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:50.715 "strip_size_kb": 0, 00:26:50.715 "state": "online", 00:26:50.715 "raid_level": "raid1", 00:26:50.715 "superblock": true, 00:26:50.715 "num_base_bdevs": 4, 00:26:50.715 "num_base_bdevs_discovered": 3, 00:26:50.715 "num_base_bdevs_operational": 3, 00:26:50.715 "process": { 00:26:50.715 "type": "rebuild", 00:26:50.715 "target": "spare", 00:26:50.715 "progress": { 00:26:50.715 "blocks": 22528, 00:26:50.715 "percent": 35 00:26:50.715 } 00:26:50.715 }, 00:26:50.715 "base_bdevs_list": [ 00:26:50.715 { 00:26:50.715 "name": "spare", 00:26:50.715 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:50.715 "is_configured": true, 00:26:50.715 "data_offset": 2048, 00:26:50.715 "data_size": 63488 00:26:50.715 }, 00:26:50.715 { 00:26:50.715 "name": null, 00:26:50.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.715 "is_configured": false, 00:26:50.715 "data_offset": 2048, 00:26:50.715 "data_size": 63488 00:26:50.715 }, 00:26:50.715 { 00:26:50.715 "name": "BaseBdev3", 00:26:50.715 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:50.715 "is_configured": true, 00:26:50.715 "data_offset": 2048, 00:26:50.715 "data_size": 63488 00:26:50.715 }, 00:26:50.715 { 00:26:50.715 "name": "BaseBdev4", 00:26:50.715 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:50.715 "is_configured": true, 00:26:50.715 "data_offset": 2048, 00:26:50.715 "data_size": 63488 00:26:50.715 } 00:26:50.715 ] 00:26:50.715 }' 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@705 -- # local timeout=890 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.715 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.974 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.974 "name": "raid_bdev1", 00:26:50.974 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:50.974 "strip_size_kb": 0, 00:26:50.974 "state": "online", 00:26:50.974 "raid_level": "raid1", 00:26:50.974 "superblock": true, 00:26:50.974 "num_base_bdevs": 4, 00:26:50.974 "num_base_bdevs_discovered": 3, 00:26:50.974 "num_base_bdevs_operational": 3, 00:26:50.974 "process": { 00:26:50.974 "type": "rebuild", 00:26:50.974 "target": "spare", 00:26:50.974 "progress": { 00:26:50.974 "blocks": 26624, 00:26:50.974 "percent": 41 00:26:50.974 } 00:26:50.974 }, 00:26:50.974 "base_bdevs_list": [ 00:26:50.974 { 00:26:50.974 "name": "spare", 00:26:50.974 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:50.974 "is_configured": true, 00:26:50.974 "data_offset": 2048, 00:26:50.974 "data_size": 63488 00:26:50.974 }, 00:26:50.974 
{ 00:26:50.974 "name": null, 00:26:50.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.974 "is_configured": false, 00:26:50.975 "data_offset": 2048, 00:26:50.975 "data_size": 63488 00:26:50.975 }, 00:26:50.975 { 00:26:50.975 "name": "BaseBdev3", 00:26:50.975 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:50.975 "is_configured": true, 00:26:50.975 "data_offset": 2048, 00:26:50.975 "data_size": 63488 00:26:50.975 }, 00:26:50.975 { 00:26:50.975 "name": "BaseBdev4", 00:26:50.975 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:50.975 "is_configured": true, 00:26:50.975 "data_offset": 2048, 00:26:50.975 "data_size": 63488 00:26:50.975 } 00:26:50.975 ] 00:26:50.975 }' 00:26:50.975 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.975 [2024-06-10 19:11:05.560215] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:26:50.975 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.975 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.975 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.975 19:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:51.234 [2024-06-10 19:11:05.921310] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:26:51.801 [2024-06-10 19:11:06.249929] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:26:51.801 [2024-06-10 19:11:06.469745] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # 
(( SECONDS < timeout )) 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.059 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.059 [2024-06-10 19:11:06.798379] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:26:52.317 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.317 "name": "raid_bdev1", 00:26:52.317 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:52.317 "strip_size_kb": 0, 00:26:52.317 "state": "online", 00:26:52.317 "raid_level": "raid1", 00:26:52.317 "superblock": true, 00:26:52.317 "num_base_bdevs": 4, 00:26:52.317 "num_base_bdevs_discovered": 3, 00:26:52.317 "num_base_bdevs_operational": 3, 00:26:52.317 "process": { 00:26:52.317 "type": "rebuild", 00:26:52.317 "target": "spare", 00:26:52.317 "progress": { 00:26:52.317 "blocks": 47104, 00:26:52.317 "percent": 74 00:26:52.317 } 00:26:52.317 }, 00:26:52.317 "base_bdevs_list": [ 00:26:52.317 { 00:26:52.317 "name": "spare", 00:26:52.317 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:52.317 "is_configured": true, 00:26:52.317 "data_offset": 2048, 00:26:52.317 "data_size": 
63488 00:26:52.317 }, 00:26:52.317 { 00:26:52.317 "name": null, 00:26:52.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.317 "is_configured": false, 00:26:52.317 "data_offset": 2048, 00:26:52.317 "data_size": 63488 00:26:52.318 }, 00:26:52.318 { 00:26:52.318 "name": "BaseBdev3", 00:26:52.318 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:52.318 "is_configured": true, 00:26:52.318 "data_offset": 2048, 00:26:52.318 "data_size": 63488 00:26:52.318 }, 00:26:52.318 { 00:26:52.318 "name": "BaseBdev4", 00:26:52.318 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:52.318 "is_configured": true, 00:26:52.318 "data_offset": 2048, 00:26:52.318 "data_size": 63488 00:26:52.318 } 00:26:52.318 ] 00:26:52.318 }' 00:26:52.318 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.318 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.318 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.318 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.318 19:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:52.318 [2024-06-10 19:11:07.018526] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:26:52.884 [2024-06-10 19:11:07.355311] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:26:52.884 [2024-06-10 19:11:07.356064] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:26:53.142 [2024-06-10 19:11:07.805471] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:53.401 [2024-06-10 19:11:07.913081] 
bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:53.401 [2024-06-10 19:11:07.916226] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.401 19:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.658 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:53.659 "name": "raid_bdev1", 00:26:53.659 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:53.659 "strip_size_kb": 0, 00:26:53.659 "state": "online", 00:26:53.659 "raid_level": "raid1", 00:26:53.659 "superblock": true, 00:26:53.659 "num_base_bdevs": 4, 00:26:53.659 "num_base_bdevs_discovered": 3, 00:26:53.659 "num_base_bdevs_operational": 3, 00:26:53.659 "base_bdevs_list": [ 00:26:53.659 { 00:26:53.659 "name": "spare", 00:26:53.659 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:53.659 "is_configured": true, 00:26:53.659 "data_offset": 2048, 00:26:53.659 "data_size": 63488 00:26:53.659 }, 00:26:53.659 { 00:26:53.659 "name": null, 
00:26:53.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.659 "is_configured": false, 00:26:53.659 "data_offset": 2048, 00:26:53.659 "data_size": 63488 00:26:53.659 }, 00:26:53.659 { 00:26:53.659 "name": "BaseBdev3", 00:26:53.659 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:53.659 "is_configured": true, 00:26:53.659 "data_offset": 2048, 00:26:53.659 "data_size": 63488 00:26:53.659 }, 00:26:53.659 { 00:26:53.659 "name": "BaseBdev4", 00:26:53.659 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:53.659 "is_configured": true, 00:26:53.659 "data_offset": 2048, 00:26:53.659 "data_size": 63488 00:26:53.659 } 00:26:53.659 ] 00:26:53.659 }' 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:53.659 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:53.917 "name": "raid_bdev1", 00:26:53.917 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:53.917 "strip_size_kb": 0, 00:26:53.917 "state": "online", 00:26:53.917 "raid_level": "raid1", 00:26:53.917 "superblock": true, 00:26:53.917 "num_base_bdevs": 4, 00:26:53.917 "num_base_bdevs_discovered": 3, 00:26:53.917 "num_base_bdevs_operational": 3, 00:26:53.917 "base_bdevs_list": [ 00:26:53.917 { 00:26:53.917 "name": "spare", 00:26:53.917 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:53.917 "is_configured": true, 00:26:53.917 "data_offset": 2048, 00:26:53.917 "data_size": 63488 00:26:53.917 }, 00:26:53.917 { 00:26:53.917 "name": null, 00:26:53.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.917 "is_configured": false, 00:26:53.917 "data_offset": 2048, 00:26:53.917 "data_size": 63488 00:26:53.917 }, 00:26:53.917 { 00:26:53.917 "name": "BaseBdev3", 00:26:53.917 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:53.917 "is_configured": true, 00:26:53.917 "data_offset": 2048, 00:26:53.917 "data_size": 63488 00:26:53.917 }, 00:26:53.917 { 00:26:53.917 "name": "BaseBdev4", 00:26:53.917 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:53.917 "is_configured": true, 00:26:53.917 "data_offset": 2048, 00:26:53.917 "data_size": 63488 00:26:53.917 } 00:26:53.917 ] 00:26:53.917 }' 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == 
\n\o\n\e ]] 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.917 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.175 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.175 "name": "raid_bdev1", 00:26:54.175 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:54.175 "strip_size_kb": 0, 00:26:54.175 "state": "online", 00:26:54.175 "raid_level": "raid1", 00:26:54.175 "superblock": true, 00:26:54.175 "num_base_bdevs": 4, 00:26:54.175 "num_base_bdevs_discovered": 3, 00:26:54.175 "num_base_bdevs_operational": 3, 00:26:54.175 "base_bdevs_list": [ 00:26:54.175 { 00:26:54.175 
"name": "spare", 00:26:54.175 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:54.175 "is_configured": true, 00:26:54.175 "data_offset": 2048, 00:26:54.175 "data_size": 63488 00:26:54.175 }, 00:26:54.175 { 00:26:54.175 "name": null, 00:26:54.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.175 "is_configured": false, 00:26:54.175 "data_offset": 2048, 00:26:54.175 "data_size": 63488 00:26:54.175 }, 00:26:54.175 { 00:26:54.175 "name": "BaseBdev3", 00:26:54.175 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:54.175 "is_configured": true, 00:26:54.175 "data_offset": 2048, 00:26:54.175 "data_size": 63488 00:26:54.175 }, 00:26:54.175 { 00:26:54.175 "name": "BaseBdev4", 00:26:54.175 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:54.175 "is_configured": true, 00:26:54.175 "data_offset": 2048, 00:26:54.175 "data_size": 63488 00:26:54.175 } 00:26:54.175 ] 00:26:54.175 }' 00:26:54.175 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.175 19:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:54.742 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:55.000 [2024-06-10 19:11:09.592087] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:55.000 [2024-06-10 19:11:09.592114] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.000 00:26:55.000 Latency(us) 00:26:55.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.000 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:55.000 raid_bdev1 : 11.01 104.23 312.68 0.00 0.00 13090.24 271.97 118279.37 00:26:55.000 
=================================================================================================================== 00:26:55.000 Total : 104.23 312.68 0.00 0.00 13090.24 271.97 118279.37 00:26:55.000 [2024-06-10 19:11:09.684034] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.000 [2024-06-10 19:11:09.684060] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.000 [2024-06-10 19:11:09.684147] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.000 [2024-06-10 19:11:09.684159] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1112a70 name raid_bdev1, state offline 00:26:55.000 0 00:26:55.000 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:26:55.000 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:55.258 19:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:26:55.517 /dev/nbd0 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.517 1+0 records in 00:26:55.517 1+0 records out 00:26:55.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259199 s, 15.8 MB/s 00:26:55.517 
19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:55.517 19:11:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:55.517 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:55.776 /dev/nbd1 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.776 1+0 records in 00:26:55.776 1+0 records out 00:26:55.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000185522 s, 22.1 MB/s 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:55.776 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.035 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:56.294 
19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:56.294 19:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:26:56.294 /dev/nbd1 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local i 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # break 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.552 1+0 records in 00:26:56.552 1+0 records out 00:26:56.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209922 s, 19.5 MB/s 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # size=4096 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # return 0 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.552 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.810 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # 
return 0 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:57.068 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:57.327 [2024-06-10 19:11:11.970157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:57.327 [2024-06-10 19:11:11.970198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.327 [2024-06-10 19:11:11.970220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c54a0 00:26:57.327 [2024-06-10 19:11:11.970232] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.327 [2024-06-10 19:11:11.971978] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.327 [2024-06-10 19:11:11.972005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:57.327 [2024-06-10 19:11:11.972082] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:57.327 [2024-06-10 19:11:11.972108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:57.327 [2024-06-10 19:11:11.972200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:57.327 [2024-06-10 19:11:11.972264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:57.327 spare 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.327 19:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.327 [2024-06-10 19:11:12.072580] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x110dff0 00:26:57.327 [2024-06-10 19:11:12.072596] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:57.327 [2024-06-10 19:11:12.072774] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12afc40 00:26:57.327 [2024-06-10 19:11:12.072910] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x110dff0 00:26:57.327 [2024-06-10 19:11:12.072920] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x110dff0 00:26:57.327 [2024-06-10 19:11:12.073019] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:26:57.649 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.649 "name": "raid_bdev1", 00:26:57.649 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:57.649 "strip_size_kb": 0, 00:26:57.649 "state": "online", 00:26:57.649 "raid_level": "raid1", 00:26:57.649 "superblock": true, 00:26:57.649 "num_base_bdevs": 4, 00:26:57.649 "num_base_bdevs_discovered": 3, 00:26:57.649 "num_base_bdevs_operational": 3, 00:26:57.649 "base_bdevs_list": [ 00:26:57.649 { 00:26:57.649 "name": "spare", 00:26:57.649 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:57.649 "is_configured": true, 00:26:57.649 "data_offset": 2048, 00:26:57.649 "data_size": 63488 00:26:57.649 }, 00:26:57.649 { 00:26:57.649 "name": null, 00:26:57.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.649 "is_configured": false, 00:26:57.649 "data_offset": 2048, 00:26:57.649 "data_size": 63488 00:26:57.649 }, 00:26:57.649 { 00:26:57.649 "name": "BaseBdev3", 00:26:57.649 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:57.649 "is_configured": true, 00:26:57.649 "data_offset": 2048, 00:26:57.649 "data_size": 63488 00:26:57.649 }, 00:26:57.649 { 00:26:57.649 "name": "BaseBdev4", 00:26:57.649 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:57.649 "is_configured": true, 00:26:57.649 "data_offset": 2048, 00:26:57.649 "data_size": 63488 00:26:57.649 } 00:26:57.649 ] 00:26:57.649 }' 00:26:57.649 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.649 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:58.220 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:58.220 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:58.220 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:58.220 
19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:58.220 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:58.220 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.220 19:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.479 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:58.479 "name": "raid_bdev1", 00:26:58.479 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:58.479 "strip_size_kb": 0, 00:26:58.479 "state": "online", 00:26:58.479 "raid_level": "raid1", 00:26:58.479 "superblock": true, 00:26:58.479 "num_base_bdevs": 4, 00:26:58.479 "num_base_bdevs_discovered": 3, 00:26:58.479 "num_base_bdevs_operational": 3, 00:26:58.479 "base_bdevs_list": [ 00:26:58.479 { 00:26:58.479 "name": "spare", 00:26:58.479 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:26:58.479 "is_configured": true, 00:26:58.479 "data_offset": 2048, 00:26:58.479 "data_size": 63488 00:26:58.479 }, 00:26:58.479 { 00:26:58.479 "name": null, 00:26:58.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.479 "is_configured": false, 00:26:58.479 "data_offset": 2048, 00:26:58.479 "data_size": 63488 00:26:58.479 }, 00:26:58.479 { 00:26:58.479 "name": "BaseBdev3", 00:26:58.479 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:58.479 "is_configured": true, 00:26:58.479 "data_offset": 2048, 00:26:58.479 "data_size": 63488 00:26:58.479 }, 00:26:58.479 { 00:26:58.479 "name": "BaseBdev4", 00:26:58.479 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:58.479 "is_configured": true, 00:26:58.479 "data_offset": 2048, 00:26:58.479 "data_size": 63488 00:26:58.479 } 00:26:58.479 ] 00:26:58.479 }' 00:26:58.479 19:11:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:58.479 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:58.479 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:58.479 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:58.479 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.479 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:58.738 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.738 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:58.996 [2024-06-10 19:11:13.554681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.996 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.255 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.255 "name": "raid_bdev1", 00:26:59.255 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:26:59.255 "strip_size_kb": 0, 00:26:59.255 "state": "online", 00:26:59.255 "raid_level": "raid1", 00:26:59.255 "superblock": true, 00:26:59.255 "num_base_bdevs": 4, 00:26:59.255 "num_base_bdevs_discovered": 2, 00:26:59.255 "num_base_bdevs_operational": 2, 00:26:59.255 "base_bdevs_list": [ 00:26:59.255 { 00:26:59.255 "name": null, 00:26:59.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.255 "is_configured": false, 00:26:59.255 "data_offset": 2048, 00:26:59.255 "data_size": 63488 00:26:59.255 }, 00:26:59.255 { 00:26:59.255 "name": null, 00:26:59.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.255 "is_configured": false, 00:26:59.255 "data_offset": 2048, 00:26:59.255 "data_size": 63488 00:26:59.255 }, 00:26:59.255 { 00:26:59.255 "name": "BaseBdev3", 00:26:59.255 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:26:59.255 "is_configured": true, 00:26:59.255 "data_offset": 2048, 00:26:59.255 "data_size": 63488 00:26:59.255 }, 00:26:59.255 { 00:26:59.255 "name": "BaseBdev4", 00:26:59.255 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:26:59.255 "is_configured": true, 00:26:59.255 "data_offset": 2048, 
00:26:59.255 "data_size": 63488 00:26:59.255 } 00:26:59.255 ] 00:26:59.255 }' 00:26:59.255 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.255 19:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:59.822 19:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:59.822 [2024-06-10 19:11:14.513517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:59.822 [2024-06-10 19:11:14.513650] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:59.822 [2024-06-10 19:11:14.513666] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:59.822 [2024-06-10 19:11:14.513693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:59.822 [2024-06-10 19:11:14.517926] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe141c0 00:26:59.822 [2024-06-10 19:11:14.520101] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:59.822 19:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:01.198 "name": "raid_bdev1", 00:27:01.198 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:01.198 "strip_size_kb": 0, 00:27:01.198 "state": "online", 00:27:01.198 "raid_level": "raid1", 00:27:01.198 "superblock": true, 00:27:01.198 "num_base_bdevs": 4, 00:27:01.198 "num_base_bdevs_discovered": 3, 00:27:01.198 "num_base_bdevs_operational": 3, 00:27:01.198 "process": { 00:27:01.198 "type": "rebuild", 00:27:01.198 "target": "spare", 00:27:01.198 "progress": { 00:27:01.198 "blocks": 22528, 00:27:01.198 "percent": 35 00:27:01.198 } 00:27:01.198 }, 00:27:01.198 "base_bdevs_list": [ 00:27:01.198 { 00:27:01.198 "name": "spare", 00:27:01.198 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:27:01.198 "is_configured": true, 00:27:01.198 "data_offset": 2048, 00:27:01.198 "data_size": 63488 00:27:01.198 }, 00:27:01.198 { 00:27:01.198 "name": null, 00:27:01.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.198 "is_configured": false, 00:27:01.198 "data_offset": 2048, 00:27:01.198 "data_size": 63488 00:27:01.198 }, 00:27:01.198 { 00:27:01.198 "name": "BaseBdev3", 00:27:01.198 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:01.198 "is_configured": true, 00:27:01.198 "data_offset": 2048, 00:27:01.198 "data_size": 63488 00:27:01.198 }, 00:27:01.198 { 00:27:01.198 "name": "BaseBdev4", 00:27:01.198 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:01.198 "is_configured": true, 00:27:01.198 "data_offset": 2048, 00:27:01.198 "data_size": 63488 00:27:01.198 } 00:27:01.198 ] 00:27:01.198 }' 00:27:01.198 19:11:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:01.198 19:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:01.457 [2024-06-10 19:11:16.015146] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:01.457 [2024-06-10 19:11:16.031113] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:01.457 [2024-06-10 19:11:16.031159] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.457 [2024-06-10 19:11:16.031175] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:01.457 [2024-06-10 19:11:16.031182] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:01.457 
19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.457 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.715 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.715 "name": "raid_bdev1", 00:27:01.715 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:01.715 "strip_size_kb": 0, 00:27:01.715 "state": "online", 00:27:01.715 "raid_level": "raid1", 00:27:01.715 "superblock": true, 00:27:01.715 "num_base_bdevs": 4, 00:27:01.715 "num_base_bdevs_discovered": 2, 00:27:01.715 "num_base_bdevs_operational": 2, 00:27:01.715 "base_bdevs_list": [ 00:27:01.715 { 00:27:01.715 "name": null, 00:27:01.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.715 "is_configured": false, 00:27:01.715 "data_offset": 2048, 00:27:01.715 "data_size": 63488 00:27:01.715 }, 00:27:01.715 { 00:27:01.715 "name": null, 00:27:01.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.715 "is_configured": false, 00:27:01.715 "data_offset": 2048, 00:27:01.715 "data_size": 63488 00:27:01.715 }, 00:27:01.715 { 00:27:01.715 "name": "BaseBdev3", 00:27:01.715 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:01.715 "is_configured": true, 00:27:01.715 "data_offset": 2048, 00:27:01.715 "data_size": 63488 00:27:01.715 }, 00:27:01.715 { 00:27:01.715 "name": "BaseBdev4", 00:27:01.715 "uuid": 
"e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:01.715 "is_configured": true, 00:27:01.715 "data_offset": 2048, 00:27:01.715 "data_size": 63488 00:27:01.715 } 00:27:01.715 ] 00:27:01.715 }' 00:27:01.715 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.715 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:02.283 19:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:02.283 [2024-06-10 19:11:16.985962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:02.283 [2024-06-10 19:11:16.986010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.283 [2024-06-10 19:11:16.986033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12afac0 00:27:02.283 [2024-06-10 19:11:16.986044] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.283 [2024-06-10 19:11:16.986372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.283 [2024-06-10 19:11:16.986388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:02.283 [2024-06-10 19:11:16.986462] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:02.283 [2024-06-10 19:11:16.986473] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:02.283 [2024-06-10 19:11:16.986483] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:02.283 [2024-06-10 19:11:16.986500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:02.283 [2024-06-10 19:11:16.990756] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x110e660 00:27:02.283 spare 00:27:02.283 [2024-06-10 19:11:16.992133] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:02.283 19:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:03.659 "name": "raid_bdev1", 00:27:03.659 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:03.659 "strip_size_kb": 0, 00:27:03.659 "state": "online", 00:27:03.659 "raid_level": "raid1", 00:27:03.659 "superblock": true, 00:27:03.659 "num_base_bdevs": 4, 00:27:03.659 "num_base_bdevs_discovered": 3, 00:27:03.659 "num_base_bdevs_operational": 3, 00:27:03.659 "process": { 00:27:03.659 "type": "rebuild", 00:27:03.659 "target": "spare", 00:27:03.659 "progress": { 00:27:03.659 
"blocks": 22528, 00:27:03.659 "percent": 35 00:27:03.659 } 00:27:03.659 }, 00:27:03.659 "base_bdevs_list": [ 00:27:03.659 { 00:27:03.659 "name": "spare", 00:27:03.659 "uuid": "3311d0f3-020b-5120-a3ee-e5ecf4e098b5", 00:27:03.659 "is_configured": true, 00:27:03.659 "data_offset": 2048, 00:27:03.659 "data_size": 63488 00:27:03.659 }, 00:27:03.659 { 00:27:03.659 "name": null, 00:27:03.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.659 "is_configured": false, 00:27:03.659 "data_offset": 2048, 00:27:03.659 "data_size": 63488 00:27:03.659 }, 00:27:03.659 { 00:27:03.659 "name": "BaseBdev3", 00:27:03.659 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:03.659 "is_configured": true, 00:27:03.659 "data_offset": 2048, 00:27:03.659 "data_size": 63488 00:27:03.659 }, 00:27:03.659 { 00:27:03.659 "name": "BaseBdev4", 00:27:03.659 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:03.659 "is_configured": true, 00:27:03.659 "data_offset": 2048, 00:27:03.659 "data_size": 63488 00:27:03.659 } 00:27:03.659 ] 00:27:03.659 }' 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:03.659 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:03.918 [2024-06-10 19:11:18.487163] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:03.918 [2024-06-10 19:11:18.503131] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:03.918 [2024-06-10 
19:11:18.503179] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.918 [2024-06-10 19:11:18.503194] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:03.918 [2024-06-10 19:11:18.503202] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.918 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.204 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.204 "name": "raid_bdev1", 00:27:04.204 "uuid": 
"fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:04.204 "strip_size_kb": 0, 00:27:04.204 "state": "online", 00:27:04.204 "raid_level": "raid1", 00:27:04.204 "superblock": true, 00:27:04.204 "num_base_bdevs": 4, 00:27:04.204 "num_base_bdevs_discovered": 2, 00:27:04.204 "num_base_bdevs_operational": 2, 00:27:04.204 "base_bdevs_list": [ 00:27:04.204 { 00:27:04.204 "name": null, 00:27:04.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.204 "is_configured": false, 00:27:04.204 "data_offset": 2048, 00:27:04.204 "data_size": 63488 00:27:04.204 }, 00:27:04.204 { 00:27:04.204 "name": null, 00:27:04.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.204 "is_configured": false, 00:27:04.204 "data_offset": 2048, 00:27:04.204 "data_size": 63488 00:27:04.204 }, 00:27:04.204 { 00:27:04.204 "name": "BaseBdev3", 00:27:04.204 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:04.204 "is_configured": true, 00:27:04.204 "data_offset": 2048, 00:27:04.204 "data_size": 63488 00:27:04.204 }, 00:27:04.204 { 00:27:04.204 "name": "BaseBdev4", 00:27:04.204 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:04.204 "is_configured": true, 00:27:04.204 "data_offset": 2048, 00:27:04.204 "data_size": 63488 00:27:04.204 } 00:27:04.204 ] 00:27:04.204 }' 00:27:04.204 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.204 19:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.770 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.029 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:05.029 "name": "raid_bdev1", 00:27:05.029 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:05.029 "strip_size_kb": 0, 00:27:05.029 "state": "online", 00:27:05.029 "raid_level": "raid1", 00:27:05.029 "superblock": true, 00:27:05.029 "num_base_bdevs": 4, 00:27:05.029 "num_base_bdevs_discovered": 2, 00:27:05.029 "num_base_bdevs_operational": 2, 00:27:05.029 "base_bdevs_list": [ 00:27:05.029 { 00:27:05.029 "name": null, 00:27:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.029 "is_configured": false, 00:27:05.029 "data_offset": 2048, 00:27:05.029 "data_size": 63488 00:27:05.029 }, 00:27:05.029 { 00:27:05.029 "name": null, 00:27:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.029 "is_configured": false, 00:27:05.029 "data_offset": 2048, 00:27:05.029 "data_size": 63488 00:27:05.029 }, 00:27:05.029 { 00:27:05.029 "name": "BaseBdev3", 00:27:05.029 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:05.029 "is_configured": true, 00:27:05.029 "data_offset": 2048, 00:27:05.029 "data_size": 63488 00:27:05.029 }, 00:27:05.029 { 00:27:05.029 "name": "BaseBdev4", 00:27:05.029 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:05.029 "is_configured": true, 00:27:05.029 "data_offset": 2048, 00:27:05.029 "data_size": 63488 00:27:05.029 } 00:27:05.029 ] 00:27:05.029 }' 00:27:05.029 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:05.029 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 
-- # [[ none == \n\o\n\e ]] 00:27:05.029 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:05.029 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:05.029 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:05.287 19:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:05.545 [2024-06-10 19:11:20.080025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:05.545 [2024-06-10 19:11:20.080071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.545 [2024-06-10 19:11:20.080094] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11141e0 00:27:05.545 [2024-06-10 19:11:20.080106] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.545 [2024-06-10 19:11:20.080410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.545 [2024-06-10 19:11:20.080426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:05.545 [2024-06-10 19:11:20.080484] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:05.545 [2024-06-10 19:11:20.080494] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:05.545 [2024-06-10 19:11:20.080504] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:05.545 BaseBdev1 00:27:05.545 19:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # 
sleep 1 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.480 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.738 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.738 "name": "raid_bdev1", 00:27:06.738 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:06.738 "strip_size_kb": 0, 00:27:06.738 "state": "online", 00:27:06.738 "raid_level": "raid1", 00:27:06.738 "superblock": true, 00:27:06.738 "num_base_bdevs": 4, 00:27:06.738 "num_base_bdevs_discovered": 2, 00:27:06.738 "num_base_bdevs_operational": 2, 00:27:06.738 "base_bdevs_list": [ 00:27:06.738 { 00:27:06.738 "name": 
null, 00:27:06.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.738 "is_configured": false, 00:27:06.738 "data_offset": 2048, 00:27:06.738 "data_size": 63488 00:27:06.738 }, 00:27:06.738 { 00:27:06.738 "name": null, 00:27:06.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.738 "is_configured": false, 00:27:06.738 "data_offset": 2048, 00:27:06.738 "data_size": 63488 00:27:06.738 }, 00:27:06.738 { 00:27:06.738 "name": "BaseBdev3", 00:27:06.738 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:06.738 "is_configured": true, 00:27:06.738 "data_offset": 2048, 00:27:06.738 "data_size": 63488 00:27:06.738 }, 00:27:06.738 { 00:27:06.738 "name": "BaseBdev4", 00:27:06.738 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:06.738 "is_configured": true, 00:27:06.738 "data_offset": 2048, 00:27:06.738 "data_size": 63488 00:27:06.738 } 00:27:06.738 ] 00:27:06.738 }' 00:27:06.738 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.738 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.304 19:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:27:07.566 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:07.566 "name": "raid_bdev1", 00:27:07.566 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:07.566 "strip_size_kb": 0, 00:27:07.566 "state": "online", 00:27:07.566 "raid_level": "raid1", 00:27:07.566 "superblock": true, 00:27:07.566 "num_base_bdevs": 4, 00:27:07.566 "num_base_bdevs_discovered": 2, 00:27:07.566 "num_base_bdevs_operational": 2, 00:27:07.566 "base_bdevs_list": [ 00:27:07.566 { 00:27:07.566 "name": null, 00:27:07.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.566 "is_configured": false, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 }, 00:27:07.566 { 00:27:07.566 "name": null, 00:27:07.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.566 "is_configured": false, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 }, 00:27:07.566 { 00:27:07.566 "name": "BaseBdev3", 00:27:07.566 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:07.566 "is_configured": true, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 }, 00:27:07.566 { 00:27:07.566 "name": "BaseBdev4", 00:27:07.566 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:07.566 "is_configured": true, 00:27:07.566 "data_offset": 2048, 00:27:07.567 "data_size": 63488 00:27:07.567 } 00:27:07.567 ] 00:27:07.567 }' 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@649 -- # local es=0 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:07.567 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:07.826 [2024-06-10 19:11:22.434536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:27:07.826 [2024-06-10 19:11:22.434647] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:07.826 [2024-06-10 19:11:22.434662] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:07.826 request: 00:27:07.826 { 00:27:07.826 "raid_bdev": "raid_bdev1", 00:27:07.826 "base_bdev": "BaseBdev1", 00:27:07.826 "method": "bdev_raid_add_base_bdev", 00:27:07.826 "req_id": 1 00:27:07.826 } 00:27:07.826 Got JSON-RPC error response 00:27:07.826 response: 00:27:07.826 { 00:27:07.826 "code": -22, 00:27:07.826 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:07.826 } 00:27:07.826 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # es=1 00:27:07.826 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:07.826 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:07.826 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:07.826 19:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.761 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.019 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:09.019 "name": "raid_bdev1", 00:27:09.019 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:09.019 "strip_size_kb": 0, 00:27:09.019 "state": "online", 00:27:09.019 "raid_level": "raid1", 00:27:09.019 "superblock": true, 00:27:09.019 "num_base_bdevs": 4, 00:27:09.019 "num_base_bdevs_discovered": 2, 00:27:09.020 "num_base_bdevs_operational": 2, 00:27:09.020 "base_bdevs_list": [ 00:27:09.020 { 00:27:09.020 "name": null, 00:27:09.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.020 "is_configured": false, 00:27:09.020 "data_offset": 2048, 00:27:09.020 "data_size": 63488 00:27:09.020 }, 00:27:09.020 { 00:27:09.020 "name": null, 00:27:09.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.020 "is_configured": false, 00:27:09.020 "data_offset": 2048, 00:27:09.020 "data_size": 63488 00:27:09.020 }, 00:27:09.020 { 00:27:09.020 "name": "BaseBdev3", 00:27:09.020 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:09.020 "is_configured": true, 00:27:09.020 "data_offset": 2048, 00:27:09.020 "data_size": 63488 00:27:09.020 }, 00:27:09.020 { 00:27:09.020 "name": "BaseBdev4", 00:27:09.020 "uuid": 
"e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:09.020 "is_configured": true, 00:27:09.020 "data_offset": 2048, 00:27:09.020 "data_size": 63488 00:27:09.020 } 00:27:09.020 ] 00:27:09.020 }' 00:27:09.020 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:09.020 19:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.587 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.846 "name": "raid_bdev1", 00:27:09.846 "uuid": "fbc8f5f6-e467-4167-9d6f-d9e2cba10e61", 00:27:09.846 "strip_size_kb": 0, 00:27:09.846 "state": "online", 00:27:09.846 "raid_level": "raid1", 00:27:09.846 "superblock": true, 00:27:09.846 "num_base_bdevs": 4, 00:27:09.846 "num_base_bdevs_discovered": 2, 00:27:09.846 "num_base_bdevs_operational": 2, 00:27:09.846 "base_bdevs_list": [ 00:27:09.846 { 00:27:09.846 "name": null, 00:27:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.846 "is_configured": false, 00:27:09.846 "data_offset": 2048, 00:27:09.846 "data_size": 63488 
00:27:09.846 }, 00:27:09.846 { 00:27:09.846 "name": null, 00:27:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.846 "is_configured": false, 00:27:09.846 "data_offset": 2048, 00:27:09.846 "data_size": 63488 00:27:09.846 }, 00:27:09.846 { 00:27:09.846 "name": "BaseBdev3", 00:27:09.846 "uuid": "5b01334f-11ff-5cd5-a4de-3e75085b08bd", 00:27:09.846 "is_configured": true, 00:27:09.846 "data_offset": 2048, 00:27:09.846 "data_size": 63488 00:27:09.846 }, 00:27:09.846 { 00:27:09.846 "name": "BaseBdev4", 00:27:09.846 "uuid": "e7b1e613-0390-5e4d-b3eb-2779120e6dc6", 00:27:09.846 "is_configured": true, 00:27:09.846 "data_offset": 2048, 00:27:09.846 "data_size": 63488 00:27:09.846 } 00:27:09.846 ] 00:27:09.846 }' 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 1778277 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@949 -- # '[' -z 1778277 ']' 00:27:09.846 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # kill -0 1778277 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # uname 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1778277 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:10.106 19:11:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1778277' 00:27:10.106 killing process with pid 1778277 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # kill 1778277 00:27:10.106 Received shutdown signal, test time was about 25.947045 seconds 00:27:10.106 00:27:10.106 Latency(us) 00:27:10.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.106 =================================================================================================================== 00:27:10.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.106 [2024-06-10 19:11:24.658713] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:10.106 [2024-06-10 19:11:24.658802] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.106 [2024-06-10 19:11:24.658855] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.106 [2024-06-10 19:11:24.658866] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x110dff0 name raid_bdev1, state offline 00:27:10.106 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # wait 1778277 00:27:10.106 [2024-06-10 19:11:24.695288] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:10.365 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:27:10.365 00:27:10.365 real 0m31.281s 00:27:10.365 user 0m48.813s 00:27:10.365 sys 0m4.943s 00:27:10.365 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:10.365 19:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:10.365 ************************************ 00:27:10.365 END TEST raid_rebuild_test_sb_io 
00:27:10.365 ************************************ 00:27:10.365 19:11:24 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:27:10.365 19:11:24 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:27:10.365 19:11:24 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:27:10.365 19:11:24 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:27:10.365 19:11:24 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:10.365 19:11:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:10.365 ************************************ 00:27:10.365 START TEST raid_state_function_test_sb_4k 00:27:10.365 ************************************ 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev2 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=1784078 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1784078' 00:27:10.365 Process raid pid: 1784078 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 1784078 /var/tmp/spdk-raid.sock 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@830 -- # '[' -z 1784078 ']' 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:10.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:10.365 19:11:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.365 [2024-06-10 19:11:25.043458] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:27:10.365 [2024-06-10 19:11:25.043500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.0 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.1 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.2 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.3 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.4 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.5 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.6 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:01.7 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:02.0 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:02.1 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:02.2 cannot be used 00:27:10.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.365 EAL: Requested device 0000:b6:02.3 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b6:02.4 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b6:02.5 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b6:02.6 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b6:02.7 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.0 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.1 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.2 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.3 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.4 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.5 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.6 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:01.7 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.0 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.1 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:27:10.366 EAL: Requested device 0000:b8:02.2 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.3 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.4 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.5 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.6 cannot be used 00:27:10.366 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:10.366 EAL: Requested device 0000:b8:02.7 cannot be used 00:27:10.624 [2024-06-10 19:11:25.162752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.624 [2024-06-10 19:11:25.249683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.624 [2024-06-10 19:11:25.309866] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.624 [2024-06-10 19:11:25.309901] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:11.191 19:11:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:11.191 19:11:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@863 -- # return 0 00:27:11.191 19:11:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:11.449 [2024-06-10 19:11:25.979823] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:11.449 [2024-06-10 19:11:25.979862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:11.449 [2024-06-10 19:11:25.979872] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:11.449 [2024-06-10 19:11:25.979883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:11.449 19:11:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:11.449 19:11:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.449 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.708 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.708 "name": "Existed_Raid", 00:27:11.708 "uuid": "46407729-7b43-48d7-abcc-6a55db757982", 
00:27:11.708 "strip_size_kb": 0, 00:27:11.708 "state": "configuring", 00:27:11.708 "raid_level": "raid1", 00:27:11.708 "superblock": true, 00:27:11.708 "num_base_bdevs": 2, 00:27:11.708 "num_base_bdevs_discovered": 0, 00:27:11.708 "num_base_bdevs_operational": 2, 00:27:11.708 "base_bdevs_list": [ 00:27:11.708 { 00:27:11.708 "name": "BaseBdev1", 00:27:11.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.708 "is_configured": false, 00:27:11.708 "data_offset": 0, 00:27:11.708 "data_size": 0 00:27:11.708 }, 00:27:11.708 { 00:27:11.708 "name": "BaseBdev2", 00:27:11.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.708 "is_configured": false, 00:27:11.708 "data_offset": 0, 00:27:11.708 "data_size": 0 00:27:11.708 } 00:27:11.708 ] 00:27:11.708 }' 00:27:11.708 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.708 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:12.274 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:12.274 [2024-06-10 19:11:26.954247] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:12.274 [2024-06-10 19:11:26.954271] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b9f10 name Existed_Raid, state configuring 00:27:12.274 19:11:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:12.533 [2024-06-10 19:11:27.118709] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:12.533 [2024-06-10 19:11:27.118739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:12.533 
[2024-06-10 19:11:27.118748] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:12.533 [2024-06-10 19:11:27.118759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:12.533 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:27:12.791 [2024-06-10 19:11:27.292568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:12.791 BaseBdev1 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local i 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:12.791 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:13.050 [ 00:27:13.050 { 00:27:13.050 "name": "BaseBdev1", 00:27:13.050 "aliases": [ 00:27:13.050 "1f5afc1c-5409-49da-b2c5-c3c0231fb33d" 00:27:13.050 ], 00:27:13.050 "product_name": "Malloc disk", 00:27:13.050 "block_size": 4096, 
00:27:13.050 "num_blocks": 8192, 00:27:13.050 "uuid": "1f5afc1c-5409-49da-b2c5-c3c0231fb33d", 00:27:13.050 "assigned_rate_limits": { 00:27:13.050 "rw_ios_per_sec": 0, 00:27:13.050 "rw_mbytes_per_sec": 0, 00:27:13.050 "r_mbytes_per_sec": 0, 00:27:13.050 "w_mbytes_per_sec": 0 00:27:13.050 }, 00:27:13.050 "claimed": true, 00:27:13.050 "claim_type": "exclusive_write", 00:27:13.050 "zoned": false, 00:27:13.050 "supported_io_types": { 00:27:13.050 "read": true, 00:27:13.050 "write": true, 00:27:13.050 "unmap": true, 00:27:13.050 "write_zeroes": true, 00:27:13.050 "flush": true, 00:27:13.050 "reset": true, 00:27:13.050 "compare": false, 00:27:13.050 "compare_and_write": false, 00:27:13.050 "abort": true, 00:27:13.050 "nvme_admin": false, 00:27:13.050 "nvme_io": false 00:27:13.050 }, 00:27:13.050 "memory_domains": [ 00:27:13.050 { 00:27:13.050 "dma_device_id": "system", 00:27:13.050 "dma_device_type": 1 00:27:13.050 }, 00:27:13.050 { 00:27:13.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.050 "dma_device_type": 2 00:27:13.050 } 00:27:13.050 ], 00:27:13.050 "driver_specific": {} 00:27:13.050 } 00:27:13.050 ] 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # return 0 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.050 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.308 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.308 "name": "Existed_Raid", 00:27:13.308 "uuid": "bbaf35b9-fde3-437e-be14-118fc196753c", 00:27:13.308 "strip_size_kb": 0, 00:27:13.308 "state": "configuring", 00:27:13.308 "raid_level": "raid1", 00:27:13.308 "superblock": true, 00:27:13.308 "num_base_bdevs": 2, 00:27:13.308 "num_base_bdevs_discovered": 1, 00:27:13.308 "num_base_bdevs_operational": 2, 00:27:13.308 "base_bdevs_list": [ 00:27:13.308 { 00:27:13.308 "name": "BaseBdev1", 00:27:13.308 "uuid": "1f5afc1c-5409-49da-b2c5-c3c0231fb33d", 00:27:13.308 "is_configured": true, 00:27:13.308 "data_offset": 256, 00:27:13.308 "data_size": 7936 00:27:13.309 }, 00:27:13.309 { 00:27:13.309 "name": "BaseBdev2", 00:27:13.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.309 "is_configured": false, 00:27:13.309 "data_offset": 0, 00:27:13.309 "data_size": 0 00:27:13.309 } 00:27:13.309 ] 00:27:13.309 }' 00:27:13.309 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.309 19:11:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:27:13.875 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:14.134 [2024-06-10 19:11:28.648124] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.134 [2024-06-10 19:11:28.648159] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b9800 name Existed_Raid, state configuring 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:14.134 [2024-06-10 19:11:28.820617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.134 [2024-06-10 19:11:28.821967] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.134 [2024-06-10 19:11:28.821998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.134 
19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.134 19:11:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.393 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.393 "name": "Existed_Raid", 00:27:14.393 "uuid": "28aeb62e-4e16-4356-8b7a-9688cffcb1cd", 00:27:14.393 "strip_size_kb": 0, 00:27:14.393 "state": "configuring", 00:27:14.393 "raid_level": "raid1", 00:27:14.393 "superblock": true, 00:27:14.393 "num_base_bdevs": 2, 00:27:14.393 "num_base_bdevs_discovered": 1, 00:27:14.393 "num_base_bdevs_operational": 2, 00:27:14.393 "base_bdevs_list": [ 00:27:14.393 { 00:27:14.393 "name": "BaseBdev1", 00:27:14.393 "uuid": "1f5afc1c-5409-49da-b2c5-c3c0231fb33d", 00:27:14.393 "is_configured": true, 00:27:14.393 "data_offset": 256, 00:27:14.393 "data_size": 7936 00:27:14.393 }, 00:27:14.393 { 00:27:14.393 "name": "BaseBdev2", 00:27:14.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.393 "is_configured": false, 00:27:14.393 "data_offset": 0, 00:27:14.393 "data_size": 0 00:27:14.393 } 00:27:14.393 ] 00:27:14.393 }' 00:27:14.393 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:27:14.393 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.961 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:27:15.219 [2024-06-10 19:11:29.814338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:15.219 [2024-06-10 19:11:29.814476] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10ba5f0 00:27:15.219 [2024-06-10 19:11:29.814488] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:15.219 [2024-06-10 19:11:29.814655] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x126c3b0 00:27:15.219 [2024-06-10 19:11:29.814771] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10ba5f0 00:27:15.219 [2024-06-10 19:11:29.814780] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10ba5f0 00:27:15.219 [2024-06-10 19:11:29.814865] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.219 BaseBdev2 00:27:15.219 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:15.219 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:27:15.219 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:27:15.219 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local i 00:27:15.219 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:27:15.219 19:11:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:27:15.219 19:11:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:15.477 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:15.736 [ 00:27:15.736 { 00:27:15.736 "name": "BaseBdev2", 00:27:15.736 "aliases": [ 00:27:15.736 "05bc0f78-269e-4a35-81a4-aaa1accfa416" 00:27:15.736 ], 00:27:15.736 "product_name": "Malloc disk", 00:27:15.736 "block_size": 4096, 00:27:15.736 "num_blocks": 8192, 00:27:15.736 "uuid": "05bc0f78-269e-4a35-81a4-aaa1accfa416", 00:27:15.736 "assigned_rate_limits": { 00:27:15.736 "rw_ios_per_sec": 0, 00:27:15.736 "rw_mbytes_per_sec": 0, 00:27:15.736 "r_mbytes_per_sec": 0, 00:27:15.736 "w_mbytes_per_sec": 0 00:27:15.736 }, 00:27:15.736 "claimed": true, 00:27:15.736 "claim_type": "exclusive_write", 00:27:15.736 "zoned": false, 00:27:15.736 "supported_io_types": { 00:27:15.736 "read": true, 00:27:15.736 "write": true, 00:27:15.736 "unmap": true, 00:27:15.736 "write_zeroes": true, 00:27:15.736 "flush": true, 00:27:15.736 "reset": true, 00:27:15.736 "compare": false, 00:27:15.736 "compare_and_write": false, 00:27:15.736 "abort": true, 00:27:15.736 "nvme_admin": false, 00:27:15.736 "nvme_io": false 00:27:15.736 }, 00:27:15.736 "memory_domains": [ 00:27:15.736 { 00:27:15.736 "dma_device_id": "system", 00:27:15.736 "dma_device_type": 1 00:27:15.736 }, 00:27:15.736 { 00:27:15.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.736 "dma_device_type": 2 00:27:15.736 } 00:27:15.736 ], 00:27:15.736 "driver_specific": {} 00:27:15.736 } 00:27:15.736 ] 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # return 0 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.736 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.995 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.995 "name": "Existed_Raid", 00:27:15.995 "uuid": "28aeb62e-4e16-4356-8b7a-9688cffcb1cd", 00:27:15.995 "strip_size_kb": 0, 00:27:15.995 "state": "online", 00:27:15.995 "raid_level": "raid1", 00:27:15.995 
"superblock": true, 00:27:15.995 "num_base_bdevs": 2, 00:27:15.995 "num_base_bdevs_discovered": 2, 00:27:15.995 "num_base_bdevs_operational": 2, 00:27:15.995 "base_bdevs_list": [ 00:27:15.995 { 00:27:15.995 "name": "BaseBdev1", 00:27:15.995 "uuid": "1f5afc1c-5409-49da-b2c5-c3c0231fb33d", 00:27:15.995 "is_configured": true, 00:27:15.995 "data_offset": 256, 00:27:15.995 "data_size": 7936 00:27:15.995 }, 00:27:15.995 { 00:27:15.995 "name": "BaseBdev2", 00:27:15.995 "uuid": "05bc0f78-269e-4a35-81a4-aaa1accfa416", 00:27:15.995 "is_configured": true, 00:27:15.995 "data_offset": 256, 00:27:15.995 "data_size": 7936 00:27:15.995 } 00:27:15.995 ] 00:27:15.995 }' 00:27:15.995 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.995 19:11:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:16.562 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:16.821 [2024-06-10 19:11:31.334587] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:16.821 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:16.821 "name": "Existed_Raid", 00:27:16.821 "aliases": [ 00:27:16.821 "28aeb62e-4e16-4356-8b7a-9688cffcb1cd" 00:27:16.821 ], 00:27:16.821 "product_name": "Raid Volume", 00:27:16.821 "block_size": 4096, 00:27:16.821 "num_blocks": 7936, 00:27:16.821 "uuid": "28aeb62e-4e16-4356-8b7a-9688cffcb1cd", 00:27:16.821 "assigned_rate_limits": { 00:27:16.821 "rw_ios_per_sec": 0, 00:27:16.821 "rw_mbytes_per_sec": 0, 00:27:16.821 "r_mbytes_per_sec": 0, 00:27:16.821 "w_mbytes_per_sec": 0 00:27:16.821 }, 00:27:16.821 "claimed": false, 00:27:16.821 "zoned": false, 00:27:16.821 "supported_io_types": { 00:27:16.821 "read": true, 00:27:16.821 "write": true, 00:27:16.821 "unmap": false, 00:27:16.821 "write_zeroes": true, 00:27:16.821 "flush": false, 00:27:16.821 "reset": true, 00:27:16.821 "compare": false, 00:27:16.821 "compare_and_write": false, 00:27:16.821 "abort": false, 00:27:16.821 "nvme_admin": false, 00:27:16.821 "nvme_io": false 00:27:16.821 }, 00:27:16.821 "memory_domains": [ 00:27:16.821 { 00:27:16.821 "dma_device_id": "system", 00:27:16.821 "dma_device_type": 1 00:27:16.821 }, 00:27:16.821 { 00:27:16.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.821 "dma_device_type": 2 00:27:16.821 }, 00:27:16.821 { 00:27:16.821 "dma_device_id": "system", 00:27:16.821 "dma_device_type": 1 00:27:16.821 }, 00:27:16.821 { 00:27:16.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.821 "dma_device_type": 2 00:27:16.821 } 00:27:16.821 ], 00:27:16.821 "driver_specific": { 00:27:16.821 "raid": { 00:27:16.821 "uuid": "28aeb62e-4e16-4356-8b7a-9688cffcb1cd", 00:27:16.821 "strip_size_kb": 0, 00:27:16.821 "state": "online", 00:27:16.821 "raid_level": "raid1", 00:27:16.821 "superblock": true, 00:27:16.821 "num_base_bdevs": 2, 00:27:16.821 "num_base_bdevs_discovered": 2, 00:27:16.821 "num_base_bdevs_operational": 2, 00:27:16.821 "base_bdevs_list": [ 
00:27:16.821 { 00:27:16.821 "name": "BaseBdev1", 00:27:16.821 "uuid": "1f5afc1c-5409-49da-b2c5-c3c0231fb33d", 00:27:16.821 "is_configured": true, 00:27:16.821 "data_offset": 256, 00:27:16.821 "data_size": 7936 00:27:16.821 }, 00:27:16.821 { 00:27:16.821 "name": "BaseBdev2", 00:27:16.821 "uuid": "05bc0f78-269e-4a35-81a4-aaa1accfa416", 00:27:16.821 "is_configured": true, 00:27:16.821 "data_offset": 256, 00:27:16.821 "data_size": 7936 00:27:16.821 } 00:27:16.821 ] 00:27:16.821 } 00:27:16.821 } 00:27:16.821 }' 00:27:16.821 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:16.821 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:16.821 BaseBdev2' 00:27:16.821 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:16.821 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:16.821 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:17.080 "name": "BaseBdev1", 00:27:17.080 "aliases": [ 00:27:17.080 "1f5afc1c-5409-49da-b2c5-c3c0231fb33d" 00:27:17.080 ], 00:27:17.080 "product_name": "Malloc disk", 00:27:17.080 "block_size": 4096, 00:27:17.080 "num_blocks": 8192, 00:27:17.080 "uuid": "1f5afc1c-5409-49da-b2c5-c3c0231fb33d", 00:27:17.080 "assigned_rate_limits": { 00:27:17.080 "rw_ios_per_sec": 0, 00:27:17.080 "rw_mbytes_per_sec": 0, 00:27:17.080 "r_mbytes_per_sec": 0, 00:27:17.080 "w_mbytes_per_sec": 0 00:27:17.080 }, 00:27:17.080 "claimed": true, 00:27:17.080 "claim_type": "exclusive_write", 00:27:17.080 "zoned": false, 00:27:17.080 
"supported_io_types": { 00:27:17.080 "read": true, 00:27:17.080 "write": true, 00:27:17.080 "unmap": true, 00:27:17.080 "write_zeroes": true, 00:27:17.080 "flush": true, 00:27:17.080 "reset": true, 00:27:17.080 "compare": false, 00:27:17.080 "compare_and_write": false, 00:27:17.080 "abort": true, 00:27:17.080 "nvme_admin": false, 00:27:17.080 "nvme_io": false 00:27:17.080 }, 00:27:17.080 "memory_domains": [ 00:27:17.080 { 00:27:17.080 "dma_device_id": "system", 00:27:17.080 "dma_device_type": 1 00:27:17.080 }, 00:27:17.080 { 00:27:17.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.080 "dma_device_type": 2 00:27:17.080 } 00:27:17.080 ], 00:27:17.080 "driver_specific": {} 00:27:17.080 }' 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:17.080 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null 
== null ]] 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:17.339 19:11:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:17.596 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:17.596 "name": "BaseBdev2", 00:27:17.596 "aliases": [ 00:27:17.596 "05bc0f78-269e-4a35-81a4-aaa1accfa416" 00:27:17.596 ], 00:27:17.596 "product_name": "Malloc disk", 00:27:17.596 "block_size": 4096, 00:27:17.596 "num_blocks": 8192, 00:27:17.596 "uuid": "05bc0f78-269e-4a35-81a4-aaa1accfa416", 00:27:17.596 "assigned_rate_limits": { 00:27:17.596 "rw_ios_per_sec": 0, 00:27:17.596 "rw_mbytes_per_sec": 0, 00:27:17.596 "r_mbytes_per_sec": 0, 00:27:17.596 "w_mbytes_per_sec": 0 00:27:17.596 }, 00:27:17.596 "claimed": true, 00:27:17.596 "claim_type": "exclusive_write", 00:27:17.596 "zoned": false, 00:27:17.596 "supported_io_types": { 00:27:17.596 "read": true, 00:27:17.596 "write": true, 00:27:17.596 "unmap": true, 00:27:17.596 "write_zeroes": true, 00:27:17.596 "flush": true, 00:27:17.596 "reset": true, 00:27:17.596 "compare": false, 00:27:17.596 "compare_and_write": false, 00:27:17.596 "abort": true, 00:27:17.596 "nvme_admin": false, 00:27:17.596 "nvme_io": false 00:27:17.596 }, 00:27:17.596 "memory_domains": [ 00:27:17.596 { 00:27:17.596 "dma_device_id": "system", 00:27:17.596 "dma_device_type": 1 00:27:17.596 }, 00:27:17.596 { 00:27:17.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.596 "dma_device_type": 2 00:27:17.596 } 00:27:17.596 ], 00:27:17.596 "driver_specific": {} 00:27:17.596 }' 00:27:17.596 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:17.596 
19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:17.596 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:17.596 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.596 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:17.854 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:18.112 [2024-06-10 19:11:32.770176] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:27:18.112 19:11:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.112 19:11:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:18.371 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.371 "name": "Existed_Raid", 00:27:18.371 "uuid": "28aeb62e-4e16-4356-8b7a-9688cffcb1cd", 00:27:18.371 "strip_size_kb": 0, 00:27:18.371 "state": "online", 00:27:18.371 "raid_level": "raid1", 00:27:18.371 "superblock": true, 
00:27:18.371 "num_base_bdevs": 2, 00:27:18.371 "num_base_bdevs_discovered": 1, 00:27:18.371 "num_base_bdevs_operational": 1, 00:27:18.371 "base_bdevs_list": [ 00:27:18.371 { 00:27:18.371 "name": null, 00:27:18.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.371 "is_configured": false, 00:27:18.371 "data_offset": 256, 00:27:18.371 "data_size": 7936 00:27:18.371 }, 00:27:18.371 { 00:27:18.371 "name": "BaseBdev2", 00:27:18.371 "uuid": "05bc0f78-269e-4a35-81a4-aaa1accfa416", 00:27:18.371 "is_configured": true, 00:27:18.371 "data_offset": 256, 00:27:18.371 "data_size": 7936 00:27:18.371 } 00:27:18.371 ] 00:27:18.371 }' 00:27:18.371 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.371 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.991 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:18.991 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:18.991 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.991 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:19.261 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:19.261 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:19.261 19:11:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:19.261 [2024-06-10 19:11:34.006472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:19.261 [2024-06-10 
19:11:34.006549] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.261 [2024-06-10 19:11:34.017115] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.261 [2024-06-10 19:11:34.017145] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.261 [2024-06-10 19:11:34.017156] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10ba5f0 name Existed_Raid, state offline 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 1784078 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@949 -- # '[' -z 1784078 ']' 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # kill -0 1784078 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # uname 00:27:19.520 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:19.520 
19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1784078 00:27:19.778 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:19.778 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:19.778 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1784078' 00:27:19.778 killing process with pid 1784078 00:27:19.778 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # kill 1784078 00:27:19.778 [2024-06-10 19:11:34.321830] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.778 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # wait 1784078 00:27:19.778 [2024-06-10 19:11:34.322683] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.779 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:27:19.779 00:27:19.779 real 0m9.527s 00:27:19.779 user 0m16.901s 00:27:19.779 sys 0m1.818s 00:27:19.779 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:19.779 19:11:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.779 ************************************ 00:27:19.779 END TEST raid_state_function_test_sb_4k 00:27:19.779 ************************************ 00:27:20.038 19:11:34 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:27:20.038 19:11:34 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:27:20.038 19:11:34 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:20.038 19:11:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:20.038 ************************************ 
00:27:20.038 START TEST raid_superblock_test_4k 00:27:20.038 ************************************ 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=1785905 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 1785905 /var/tmp/spdk-raid.sock 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@830 -- # '[' -z 1785905 ']' 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:20.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:20.038 19:11:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.038 [2024-06-10 19:11:34.652151] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:27:20.038 [2024-06-10 19:11:34.652205] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785905 ] 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.0 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.1 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.2 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.3 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.4 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.5 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.6 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:01.7 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.0 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.1 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.2 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.3 cannot be used 00:27:20.038 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.4 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.5 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.6 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b6:02.7 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.0 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.1 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.2 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.3 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.4 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.5 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.6 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:01.7 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.0 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.1 cannot be used 00:27:20.038 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.2 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.3 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.4 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.5 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.6 cannot be used 00:27:20.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.038 EAL: Requested device 0000:b8:02.7 cannot be used 00:27:20.038 [2024-06-10 19:11:34.785001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.297 [2024-06-10 19:11:34.871226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.297 [2024-06-10 19:11:34.923608] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.297 [2024-06-10 19:11:34.923641] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@863 -- # return 0 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # 
local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:20.864 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:27:21.122 malloc1 00:27:21.122 19:11:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:21.381 [2024-06-10 19:11:35.995087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:21.381 [2024-06-10 19:11:35.995130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.381 [2024-06-10 19:11:35.995148] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1873b70 00:27:21.381 [2024-06-10 19:11:35.995159] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.381 [2024-06-10 19:11:35.996669] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.381 [2024-06-10 19:11:35.996695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:21.381 pt1 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:21.381 19:11:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:21.381 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:27:21.639 malloc2 00:27:21.639 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:21.898 [2024-06-10 19:11:36.452811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:21.898 [2024-06-10 19:11:36.452852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.898 [2024-06-10 19:11:36.452867] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1874f70 00:27:21.898 [2024-06-10 19:11:36.452879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.898 [2024-06-10 19:11:36.454288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.898 [2024-06-10 19:11:36.454313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:21.898 pt2 00:27:21.898 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:21.898 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= 
num_base_bdevs )) 00:27:21.898 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:22.157 [2024-06-10 19:11:36.681425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:22.157 [2024-06-10 19:11:36.682593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:22.157 [2024-06-10 19:11:36.682724] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a17870 00:27:22.157 [2024-06-10 19:11:36.682736] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:22.157 [2024-06-10 19:11:36.682911] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a0d290 00:27:22.157 [2024-06-10 19:11:36.683048] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a17870 00:27:22.157 [2024-06-10 19:11:36.683058] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a17870 00:27:22.157 [2024-06-10 19:11:36.683146] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:22.157 19:11:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.157 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.416 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.416 "name": "raid_bdev1", 00:27:22.416 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:22.416 "strip_size_kb": 0, 00:27:22.416 "state": "online", 00:27:22.416 "raid_level": "raid1", 00:27:22.416 "superblock": true, 00:27:22.416 "num_base_bdevs": 2, 00:27:22.416 "num_base_bdevs_discovered": 2, 00:27:22.416 "num_base_bdevs_operational": 2, 00:27:22.416 "base_bdevs_list": [ 00:27:22.416 { 00:27:22.416 "name": "pt1", 00:27:22.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:22.416 "is_configured": true, 00:27:22.416 "data_offset": 256, 00:27:22.416 "data_size": 7936 00:27:22.416 }, 00:27:22.416 { 00:27:22.416 "name": "pt2", 00:27:22.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:22.416 "is_configured": true, 00:27:22.416 "data_offset": 256, 00:27:22.416 "data_size": 7936 00:27:22.416 } 00:27:22.416 ] 00:27:22.416 }' 00:27:22.416 19:11:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.416 19:11:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:22.985 [2024-06-10 19:11:37.720328] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:22.985 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:22.985 "name": "raid_bdev1", 00:27:22.985 "aliases": [ 00:27:22.985 "6b329c58-397c-409f-9e7b-337989d7f3ad" 00:27:22.985 ], 00:27:22.985 "product_name": "Raid Volume", 00:27:22.985 "block_size": 4096, 00:27:22.985 "num_blocks": 7936, 00:27:22.985 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:22.985 "assigned_rate_limits": { 00:27:22.985 "rw_ios_per_sec": 0, 00:27:22.985 "rw_mbytes_per_sec": 0, 00:27:22.985 "r_mbytes_per_sec": 0, 00:27:22.985 "w_mbytes_per_sec": 0 00:27:22.985 }, 00:27:22.985 "claimed": false, 00:27:22.985 "zoned": false, 00:27:22.985 "supported_io_types": { 00:27:22.985 "read": true, 00:27:22.985 "write": true, 00:27:22.985 "unmap": false, 00:27:22.985 "write_zeroes": true, 00:27:22.985 "flush": false, 00:27:22.985 "reset": true, 00:27:22.985 "compare": false, 00:27:22.985 "compare_and_write": false, 00:27:22.985 "abort": false, 00:27:22.985 "nvme_admin": 
false, 00:27:22.985 "nvme_io": false 00:27:22.985 }, 00:27:22.985 "memory_domains": [ 00:27:22.985 { 00:27:22.985 "dma_device_id": "system", 00:27:22.985 "dma_device_type": 1 00:27:22.985 }, 00:27:22.985 { 00:27:22.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.985 "dma_device_type": 2 00:27:22.985 }, 00:27:22.985 { 00:27:22.985 "dma_device_id": "system", 00:27:22.985 "dma_device_type": 1 00:27:22.985 }, 00:27:22.985 { 00:27:22.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.985 "dma_device_type": 2 00:27:22.985 } 00:27:22.985 ], 00:27:22.985 "driver_specific": { 00:27:22.985 "raid": { 00:27:22.985 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:22.985 "strip_size_kb": 0, 00:27:22.985 "state": "online", 00:27:22.985 "raid_level": "raid1", 00:27:22.985 "superblock": true, 00:27:22.985 "num_base_bdevs": 2, 00:27:22.985 "num_base_bdevs_discovered": 2, 00:27:22.985 "num_base_bdevs_operational": 2, 00:27:22.985 "base_bdevs_list": [ 00:27:22.985 { 00:27:22.985 "name": "pt1", 00:27:22.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:22.985 "is_configured": true, 00:27:22.985 "data_offset": 256, 00:27:22.985 "data_size": 7936 00:27:22.985 }, 00:27:22.985 { 00:27:22.985 "name": "pt2", 00:27:22.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:22.985 "is_configured": true, 00:27:22.985 "data_offset": 256, 00:27:22.985 "data_size": 7936 00:27:22.985 } 00:27:22.985 ] 00:27:22.985 } 00:27:22.985 } 00:27:22.985 }' 00:27:23.244 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:23.244 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:23.244 pt2' 00:27:23.244 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:23.244 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:23.244 19:11:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:23.502 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:23.502 "name": "pt1", 00:27:23.502 "aliases": [ 00:27:23.502 "00000000-0000-0000-0000-000000000001" 00:27:23.502 ], 00:27:23.502 "product_name": "passthru", 00:27:23.502 "block_size": 4096, 00:27:23.502 "num_blocks": 8192, 00:27:23.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:23.502 "assigned_rate_limits": { 00:27:23.502 "rw_ios_per_sec": 0, 00:27:23.502 "rw_mbytes_per_sec": 0, 00:27:23.502 "r_mbytes_per_sec": 0, 00:27:23.502 "w_mbytes_per_sec": 0 00:27:23.502 }, 00:27:23.502 "claimed": true, 00:27:23.502 "claim_type": "exclusive_write", 00:27:23.502 "zoned": false, 00:27:23.502 "supported_io_types": { 00:27:23.502 "read": true, 00:27:23.502 "write": true, 00:27:23.502 "unmap": true, 00:27:23.502 "write_zeroes": true, 00:27:23.502 "flush": true, 00:27:23.502 "reset": true, 00:27:23.502 "compare": false, 00:27:23.502 "compare_and_write": false, 00:27:23.502 "abort": true, 00:27:23.502 "nvme_admin": false, 00:27:23.502 "nvme_io": false 00:27:23.502 }, 00:27:23.502 "memory_domains": [ 00:27:23.502 { 00:27:23.502 "dma_device_id": "system", 00:27:23.502 "dma_device_type": 1 00:27:23.502 }, 00:27:23.502 { 00:27:23.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.502 "dma_device_type": 2 00:27:23.502 } 00:27:23.502 ], 00:27:23.502 "driver_specific": { 00:27:23.502 "passthru": { 00:27:23.502 "name": "pt1", 00:27:23.502 "base_bdev_name": "malloc1" 00:27:23.502 } 00:27:23.502 } 00:27:23.502 }' 00:27:23.502 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.502 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.502 19:11:38 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:23.502 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.502 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.503 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.503 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.503 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.761 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:23.761 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.761 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.761 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.761 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:23.762 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:23.762 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:24.021 "name": "pt2", 00:27:24.021 "aliases": [ 00:27:24.021 "00000000-0000-0000-0000-000000000002" 00:27:24.021 ], 00:27:24.021 "product_name": "passthru", 00:27:24.021 "block_size": 4096, 00:27:24.021 "num_blocks": 8192, 00:27:24.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:24.021 "assigned_rate_limits": { 00:27:24.021 "rw_ios_per_sec": 0, 00:27:24.021 "rw_mbytes_per_sec": 0, 00:27:24.021 "r_mbytes_per_sec": 0, 00:27:24.021 "w_mbytes_per_sec": 0 00:27:24.021 }, 
00:27:24.021 "claimed": true, 00:27:24.021 "claim_type": "exclusive_write", 00:27:24.021 "zoned": false, 00:27:24.021 "supported_io_types": { 00:27:24.021 "read": true, 00:27:24.021 "write": true, 00:27:24.021 "unmap": true, 00:27:24.021 "write_zeroes": true, 00:27:24.021 "flush": true, 00:27:24.021 "reset": true, 00:27:24.021 "compare": false, 00:27:24.021 "compare_and_write": false, 00:27:24.021 "abort": true, 00:27:24.021 "nvme_admin": false, 00:27:24.021 "nvme_io": false 00:27:24.021 }, 00:27:24.021 "memory_domains": [ 00:27:24.021 { 00:27:24.021 "dma_device_id": "system", 00:27:24.021 "dma_device_type": 1 00:27:24.021 }, 00:27:24.021 { 00:27:24.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.021 "dma_device_type": 2 00:27:24.021 } 00:27:24.021 ], 00:27:24.021 "driver_specific": { 00:27:24.021 "passthru": { 00:27:24.021 "name": "pt2", 00:27:24.021 "base_bdev_name": "malloc2" 00:27:24.021 } 00:27:24.021 } 00:27:24.021 }' 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:24.021 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.281 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:24.281 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:24.281 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.281 19:11:38 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:24.281 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:24.281 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:24.281 19:11:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:24.540 [2024-06-10 19:11:39.124022] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:24.540 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6b329c58-397c-409f-9e7b-337989d7f3ad 00:27:24.540 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 6b329c58-397c-409f-9e7b-337989d7f3ad ']' 00:27:24.540 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:24.799 [2024-06-10 19:11:39.352425] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:24.799 [2024-06-10 19:11:39.352442] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.799 [2024-06-10 19:11:39.352490] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.800 [2024-06-10 19:11:39.352538] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:24.800 [2024-06-10 19:11:39.352548] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a17870 name raid_bdev1, state offline 00:27:24.800 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.800 19:11:39 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:25.059 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:25.059 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:25.059 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:25.059 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:25.317 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:25.317 19:11:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:25.317 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:25.317 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # local es=0 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:25.576 19:11:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:25.576 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:25.835 [2024-06-10 19:11:40.487366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:25.835 [2024-06-10 19:11:40.488629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:25.835 [2024-06-10 19:11:40.488677] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:25.835 [2024-06-10 19:11:40.488714] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:27:25.835 [2024-06-10 19:11:40.488731] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:25.835 [2024-06-10 19:11:40.488740] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1874010 name raid_bdev1, state configuring 00:27:25.835 request: 00:27:25.835 { 00:27:25.835 "name": "raid_bdev1", 00:27:25.835 "raid_level": "raid1", 00:27:25.835 "base_bdevs": [ 00:27:25.835 "malloc1", 00:27:25.835 "malloc2" 00:27:25.835 ], 00:27:25.835 "superblock": false, 00:27:25.835 "method": "bdev_raid_create", 00:27:25.835 "req_id": 1 00:27:25.835 } 00:27:25.835 Got JSON-RPC error response 00:27:25.835 response: 00:27:25.835 { 00:27:25.835 "code": -17, 00:27:25.835 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:25.835 } 00:27:25.835 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # es=1 00:27:25.835 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:25.835 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:25.835 19:11:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:25.835 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:25.835 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.094 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:26.094 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:26.094 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:26.353 [2024-06-10 19:11:40.928481] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:26.353 [2024-06-10 19:11:40.928518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.353 [2024-06-10 19:11:40.928533] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a175f0 00:27:26.353 [2024-06-10 19:11:40.928544] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.353 [2024-06-10 19:11:40.929989] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.353 [2024-06-10 19:11:40.930015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:26.353 [2024-06-10 19:11:40.930074] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:26.353 [2024-06-10 19:11:40.930097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:26.353 pt1 00:27:26.353 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:26.353 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.353 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:26.353 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.353 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.354 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:26.354 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.354 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.354 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.354 
19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.354 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.354 19:11:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.613 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.613 "name": "raid_bdev1", 00:27:26.613 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:26.613 "strip_size_kb": 0, 00:27:26.613 "state": "configuring", 00:27:26.613 "raid_level": "raid1", 00:27:26.613 "superblock": true, 00:27:26.613 "num_base_bdevs": 2, 00:27:26.613 "num_base_bdevs_discovered": 1, 00:27:26.613 "num_base_bdevs_operational": 2, 00:27:26.613 "base_bdevs_list": [ 00:27:26.613 { 00:27:26.613 "name": "pt1", 00:27:26.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:26.613 "is_configured": true, 00:27:26.613 "data_offset": 256, 00:27:26.613 "data_size": 7936 00:27:26.613 }, 00:27:26.613 { 00:27:26.613 "name": null, 00:27:26.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:26.613 "is_configured": false, 00:27:26.613 "data_offset": 256, 00:27:26.613 "data_size": 7936 00:27:26.613 } 00:27:26.613 ] 00:27:26.613 }' 00:27:26.613 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.613 19:11:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:27.180 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:27:27.180 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:27.180 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:27.180 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:27.438 [2024-06-10 19:11:41.959203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:27.438 [2024-06-10 19:11:41.959244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.438 [2024-06-10 19:11:41.959261] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1873da0 00:27:27.438 [2024-06-10 19:11:41.959271] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.438 [2024-06-10 19:11:41.959571] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.438 [2024-06-10 19:11:41.959604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:27.438 [2024-06-10 19:11:41.959659] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:27.438 [2024-06-10 19:11:41.959677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:27.438 [2024-06-10 19:11:41.959766] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a0cb90 00:27:27.439 [2024-06-10 19:11:41.959776] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:27.439 [2024-06-10 19:11:41.959928] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x186d730 00:27:27.439 [2024-06-10 19:11:41.960039] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a0cb90 00:27:27.439 [2024-06-10 19:11:41.960048] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a0cb90 00:27:27.439 [2024-06-10 19:11:41.960133] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.439 pt2 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ 
)) 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.439 19:11:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.697 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.697 "name": "raid_bdev1", 00:27:27.697 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:27.697 "strip_size_kb": 0, 00:27:27.697 "state": "online", 00:27:27.697 "raid_level": "raid1", 00:27:27.697 "superblock": true, 00:27:27.697 "num_base_bdevs": 2, 00:27:27.697 "num_base_bdevs_discovered": 2, 
00:27:27.697 "num_base_bdevs_operational": 2, 00:27:27.697 "base_bdevs_list": [ 00:27:27.697 { 00:27:27.697 "name": "pt1", 00:27:27.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:27.697 "is_configured": true, 00:27:27.697 "data_offset": 256, 00:27:27.697 "data_size": 7936 00:27:27.697 }, 00:27:27.697 { 00:27:27.697 "name": "pt2", 00:27:27.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:27.697 "is_configured": true, 00:27:27.697 "data_offset": 256, 00:27:27.697 "data_size": 7936 00:27:27.697 } 00:27:27.697 ] 00:27:27.697 }' 00:27:27.697 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.697 19:11:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:28.265 [2024-06-10 19:11:42.962046] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:28.265 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:28.265 "name": "raid_bdev1", 00:27:28.265 "aliases": [ 
00:27:28.265 "6b329c58-397c-409f-9e7b-337989d7f3ad" 00:27:28.265 ], 00:27:28.265 "product_name": "Raid Volume", 00:27:28.265 "block_size": 4096, 00:27:28.265 "num_blocks": 7936, 00:27:28.265 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:28.265 "assigned_rate_limits": { 00:27:28.265 "rw_ios_per_sec": 0, 00:27:28.265 "rw_mbytes_per_sec": 0, 00:27:28.265 "r_mbytes_per_sec": 0, 00:27:28.265 "w_mbytes_per_sec": 0 00:27:28.265 }, 00:27:28.265 "claimed": false, 00:27:28.265 "zoned": false, 00:27:28.265 "supported_io_types": { 00:27:28.265 "read": true, 00:27:28.265 "write": true, 00:27:28.265 "unmap": false, 00:27:28.265 "write_zeroes": true, 00:27:28.265 "flush": false, 00:27:28.265 "reset": true, 00:27:28.265 "compare": false, 00:27:28.265 "compare_and_write": false, 00:27:28.265 "abort": false, 00:27:28.265 "nvme_admin": false, 00:27:28.265 "nvme_io": false 00:27:28.265 }, 00:27:28.265 "memory_domains": [ 00:27:28.265 { 00:27:28.266 "dma_device_id": "system", 00:27:28.266 "dma_device_type": 1 00:27:28.266 }, 00:27:28.266 { 00:27:28.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.266 "dma_device_type": 2 00:27:28.266 }, 00:27:28.266 { 00:27:28.266 "dma_device_id": "system", 00:27:28.266 "dma_device_type": 1 00:27:28.266 }, 00:27:28.266 { 00:27:28.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.266 "dma_device_type": 2 00:27:28.266 } 00:27:28.266 ], 00:27:28.266 "driver_specific": { 00:27:28.266 "raid": { 00:27:28.266 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:28.266 "strip_size_kb": 0, 00:27:28.266 "state": "online", 00:27:28.266 "raid_level": "raid1", 00:27:28.266 "superblock": true, 00:27:28.266 "num_base_bdevs": 2, 00:27:28.266 "num_base_bdevs_discovered": 2, 00:27:28.266 "num_base_bdevs_operational": 2, 00:27:28.266 "base_bdevs_list": [ 00:27:28.266 { 00:27:28.266 "name": "pt1", 00:27:28.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:28.266 "is_configured": true, 00:27:28.266 "data_offset": 256, 00:27:28.266 "data_size": 
7936 00:27:28.266 }, 00:27:28.266 { 00:27:28.266 "name": "pt2", 00:27:28.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:28.266 "is_configured": true, 00:27:28.266 "data_offset": 256, 00:27:28.266 "data_size": 7936 00:27:28.266 } 00:27:28.266 ] 00:27:28.266 } 00:27:28.266 } 00:27:28.266 }' 00:27:28.266 19:11:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:28.525 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:28.525 pt2' 00:27:28.525 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:28.525 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:28.525 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:28.525 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:28.525 "name": "pt1", 00:27:28.525 "aliases": [ 00:27:28.525 "00000000-0000-0000-0000-000000000001" 00:27:28.525 ], 00:27:28.525 "product_name": "passthru", 00:27:28.525 "block_size": 4096, 00:27:28.525 "num_blocks": 8192, 00:27:28.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:28.525 "assigned_rate_limits": { 00:27:28.525 "rw_ios_per_sec": 0, 00:27:28.525 "rw_mbytes_per_sec": 0, 00:27:28.525 "r_mbytes_per_sec": 0, 00:27:28.525 "w_mbytes_per_sec": 0 00:27:28.525 }, 00:27:28.525 "claimed": true, 00:27:28.525 "claim_type": "exclusive_write", 00:27:28.525 "zoned": false, 00:27:28.525 "supported_io_types": { 00:27:28.525 "read": true, 00:27:28.525 "write": true, 00:27:28.525 "unmap": true, 00:27:28.525 "write_zeroes": true, 00:27:28.525 "flush": true, 00:27:28.525 "reset": true, 00:27:28.525 "compare": false, 00:27:28.525 "compare_and_write": false, 00:27:28.525 
"abort": true, 00:27:28.525 "nvme_admin": false, 00:27:28.525 "nvme_io": false 00:27:28.525 }, 00:27:28.525 "memory_domains": [ 00:27:28.525 { 00:27:28.525 "dma_device_id": "system", 00:27:28.525 "dma_device_type": 1 00:27:28.525 }, 00:27:28.525 { 00:27:28.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.525 "dma_device_type": 2 00:27:28.525 } 00:27:28.525 ], 00:27:28.525 "driver_specific": { 00:27:28.525 "passthru": { 00:27:28.525 "name": "pt1", 00:27:28.525 "base_bdev_name": "malloc1" 00:27:28.525 } 00:27:28.525 } 00:27:28.525 }' 00:27:28.525 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:28.784 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.042 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.042 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:29.042 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:29.042 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:29.042 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:29.301 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:29.301 "name": "pt2", 00:27:29.301 "aliases": [ 00:27:29.301 "00000000-0000-0000-0000-000000000002" 00:27:29.301 ], 00:27:29.301 "product_name": "passthru", 00:27:29.301 "block_size": 4096, 00:27:29.301 "num_blocks": 8192, 00:27:29.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:29.301 "assigned_rate_limits": { 00:27:29.301 "rw_ios_per_sec": 0, 00:27:29.301 "rw_mbytes_per_sec": 0, 00:27:29.301 "r_mbytes_per_sec": 0, 00:27:29.301 "w_mbytes_per_sec": 0 00:27:29.301 }, 00:27:29.301 "claimed": true, 00:27:29.301 "claim_type": "exclusive_write", 00:27:29.301 "zoned": false, 00:27:29.301 "supported_io_types": { 00:27:29.301 "read": true, 00:27:29.301 "write": true, 00:27:29.301 "unmap": true, 00:27:29.301 "write_zeroes": true, 00:27:29.301 "flush": true, 00:27:29.301 "reset": true, 00:27:29.301 "compare": false, 00:27:29.301 "compare_and_write": false, 00:27:29.301 "abort": true, 00:27:29.301 "nvme_admin": false, 00:27:29.301 "nvme_io": false 00:27:29.301 }, 00:27:29.301 "memory_domains": [ 00:27:29.301 { 00:27:29.301 "dma_device_id": "system", 00:27:29.301 "dma_device_type": 1 00:27:29.301 }, 00:27:29.301 { 00:27:29.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.301 "dma_device_type": 2 00:27:29.301 } 00:27:29.301 ], 00:27:29.301 "driver_specific": { 00:27:29.301 "passthru": { 00:27:29.301 "name": "pt2", 00:27:29.301 "base_bdev_name": "malloc2" 00:27:29.301 } 00:27:29.301 } 00:27:29.301 }' 00:27:29.301 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:29.301 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:29.301 19:11:43 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:29.301 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:29.301 19:11:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:29.301 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:29.301 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:29.560 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:29.819 [2024-06-10 19:11:44.405838] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:29.819 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 6b329c58-397c-409f-9e7b-337989d7f3ad '!=' 6b329c58-397c-409f-9e7b-337989d7f3ad ']' 00:27:29.819 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:27:29.819 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:29.819 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:27:29.819 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:30.078 [2024-06-10 19:11:44.634265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.078 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.336 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.336 "name": "raid_bdev1", 00:27:30.336 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:30.336 "strip_size_kb": 0, 00:27:30.336 "state": "online", 00:27:30.336 "raid_level": "raid1", 00:27:30.336 
"superblock": true, 00:27:30.336 "num_base_bdevs": 2, 00:27:30.336 "num_base_bdevs_discovered": 1, 00:27:30.336 "num_base_bdevs_operational": 1, 00:27:30.336 "base_bdevs_list": [ 00:27:30.336 { 00:27:30.336 "name": null, 00:27:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.336 "is_configured": false, 00:27:30.336 "data_offset": 256, 00:27:30.336 "data_size": 7936 00:27:30.336 }, 00:27:30.336 { 00:27:30.336 "name": "pt2", 00:27:30.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.337 "is_configured": true, 00:27:30.337 "data_offset": 256, 00:27:30.337 "data_size": 7936 00:27:30.337 } 00:27:30.337 ] 00:27:30.337 }' 00:27:30.337 19:11:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.337 19:11:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:30.904 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:31.163 [2024-06-10 19:11:45.664959] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:31.163 [2024-06-10 19:11:45.664986] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:31.163 [2024-06-10 19:11:45.665030] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:31.163 [2024-06-10 19:11:45.665067] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:31.163 [2024-06-10 19:11:45.665078] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a0cb90 name raid_bdev1, state offline 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:31.163 19:11:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:31.422 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:31.422 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:31.422 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:31.422 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:31.422 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:27:31.422 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:31.681 [2024-06-10 19:11:46.330682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:31.681 [2024-06-10 19:11:46.330723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.681 [2024-06-10 19:11:46.330739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1874760 00:27:31.681 [2024-06-10 19:11:46.330750] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.681 [2024-06-10 19:11:46.332239] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:27:31.681 [2024-06-10 19:11:46.332265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:31.681 [2024-06-10 19:11:46.332323] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:31.681 [2024-06-10 19:11:46.332346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:31.681 [2024-06-10 19:11:46.332421] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x186a4c0 00:27:31.681 [2024-06-10 19:11:46.332430] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:31.681 [2024-06-10 19:11:46.332595] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x186ad90 00:27:31.681 [2024-06-10 19:11:46.332706] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x186a4c0 00:27:31.681 [2024-06-10 19:11:46.332715] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x186a4c0 00:27:31.681 [2024-06-10 19:11:46.332803] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.681 pt2 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.681 19:11:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.681 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.939 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.939 "name": "raid_bdev1", 00:27:31.939 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:31.939 "strip_size_kb": 0, 00:27:31.939 "state": "online", 00:27:31.939 "raid_level": "raid1", 00:27:31.939 "superblock": true, 00:27:31.939 "num_base_bdevs": 2, 00:27:31.939 "num_base_bdevs_discovered": 1, 00:27:31.939 "num_base_bdevs_operational": 1, 00:27:31.939 "base_bdevs_list": [ 00:27:31.939 { 00:27:31.939 "name": null, 00:27:31.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.939 "is_configured": false, 00:27:31.939 "data_offset": 256, 00:27:31.939 "data_size": 7936 00:27:31.939 }, 00:27:31.939 { 00:27:31.939 "name": "pt2", 00:27:31.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:31.939 "is_configured": true, 00:27:31.939 "data_offset": 256, 00:27:31.939 "data_size": 7936 00:27:31.939 } 00:27:31.939 ] 00:27:31.939 }' 00:27:31.939 19:11:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.939 19:11:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:32.507 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:27:32.766 [2024-06-10 19:11:47.353363] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:32.766 [2024-06-10 19:11:47.353384] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:32.766 [2024-06-10 19:11:47.353430] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:32.766 [2024-06-10 19:11:47.353466] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:32.766 [2024-06-10 19:11:47.353477] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x186a4c0 name raid_bdev1, state offline 00:27:32.766 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.766 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:33.025 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:33.025 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:33.025 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:27:33.025 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:33.284 [2024-06-10 19:11:47.806544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:33.284 [2024-06-10 19:11:47.806588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.284 [2024-06-10 19:11:47.806603] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a0b540 00:27:33.284 [2024-06-10 19:11:47.806614] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:27:33.284 [2024-06-10 19:11:47.808096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.284 [2024-06-10 19:11:47.808122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:33.284 [2024-06-10 19:11:47.808181] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:33.284 [2024-06-10 19:11:47.808205] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:33.284 [2024-06-10 19:11:47.808301] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:33.284 [2024-06-10 19:11:47.808314] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.284 [2024-06-10 19:11:47.808325] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x186f070 name raid_bdev1, state configuring 00:27:33.284 [2024-06-10 19:11:47.808347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.284 [2024-06-10 19:11:47.808396] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x186ba50 00:27:33.284 [2024-06-10 19:11:47.808405] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:33.284 [2024-06-10 19:11:47.808556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a175b0 00:27:33.284 [2024-06-10 19:11:47.808671] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x186ba50 00:27:33.284 [2024-06-10 19:11:47.808680] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x186ba50 00:27:33.284 [2024-06-10 19:11:47.808770] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.284 pt1 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.284 19:11:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.543 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.543 "name": "raid_bdev1", 00:27:33.543 "uuid": "6b329c58-397c-409f-9e7b-337989d7f3ad", 00:27:33.543 "strip_size_kb": 0, 00:27:33.543 "state": "online", 00:27:33.543 "raid_level": "raid1", 00:27:33.543 "superblock": true, 00:27:33.543 "num_base_bdevs": 2, 00:27:33.543 "num_base_bdevs_discovered": 1, 00:27:33.543 "num_base_bdevs_operational": 1, 00:27:33.543 "base_bdevs_list": [ 00:27:33.543 { 00:27:33.543 "name": null, 00:27:33.543 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:33.543 "is_configured": false, 00:27:33.543 "data_offset": 256, 00:27:33.543 "data_size": 7936 00:27:33.543 }, 00:27:33.543 { 00:27:33.543 "name": "pt2", 00:27:33.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:33.543 "is_configured": true, 00:27:33.543 "data_offset": 256, 00:27:33.543 "data_size": 7936 00:27:33.543 } 00:27:33.543 ] 00:27:33.543 }' 00:27:33.543 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.543 19:11:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.111 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:34.111 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:34.111 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:34.111 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:34.111 19:11:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:34.370 [2024-06-10 19:11:49.062009] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.370 19:11:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 6b329c58-397c-409f-9e7b-337989d7f3ad '!=' 6b329c58-397c-409f-9e7b-337989d7f3ad ']' 00:27:34.370 19:11:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 1785905 00:27:34.370 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@949 -- # '[' -z 1785905 ']' 00:27:34.370 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # kill -0 1785905 00:27:34.370 19:11:49 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # uname 00:27:34.370 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:34.370 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1785905 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1785905' 00:27:34.629 killing process with pid 1785905 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # kill 1785905 00:27:34.629 [2024-06-10 19:11:49.139323] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:34.629 [2024-06-10 19:11:49.139372] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.629 [2024-06-10 19:11:49.139409] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.629 [2024-06-10 19:11:49.139420] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x186ba50 name raid_bdev1, state offline 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # wait 1785905 00:27:34.629 [2024-06-10 19:11:49.154796] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:27:34.629 00:27:34.629 real 0m14.745s 00:27:34.629 user 0m26.739s 00:27:34.629 sys 0m2.737s 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:34.629 19:11:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.629 
************************************ 00:27:34.629 END TEST raid_superblock_test_4k 00:27:34.629 ************************************ 00:27:34.889 19:11:49 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:27:34.889 19:11:49 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:27:34.889 19:11:49 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:27:34.889 19:11:49 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:34.889 19:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:34.889 ************************************ 00:27:34.889 START TEST raid_rebuild_test_sb_4k 00:27:34.889 ************************************ 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false true 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:34.889 19:11:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=1788624 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 1788624 /var/tmp/spdk-raid.sock 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:34.889 19:11:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@830 -- # '[' -z 1788624 ']' 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:34.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:34.889 19:11:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.889 [2024-06-10 19:11:49.492250] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:27:34.889 [2024-06-10 19:11:49.492306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788624 ] 00:27:34.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:34.889 Zero copy mechanism will not be used. 
00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.0 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.1 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.2 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.3 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.4 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.5 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.6 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:01.7 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.0 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.1 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.2 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.3 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.4 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.5 cannot be used 00:27:34.889 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.6 cannot be used 00:27:34.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.889 EAL: Requested device 0000:b6:02.7 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.0 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.1 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.2 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.3 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.4 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.5 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.6 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:01.7 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.0 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.1 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.2 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.3 cannot be used 00:27:34.890 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.4 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.5 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.6 cannot be used 00:27:34.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.890 EAL: Requested device 0000:b8:02.7 cannot be used 00:27:34.890 [2024-06-10 19:11:49.625969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.148 [2024-06-10 19:11:49.713529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.148 [2024-06-10 19:11:49.780591] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.148 [2024-06-10 19:11:49.780627] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.715 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:35.715 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@863 -- # return 0 00:27:35.715 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:35.715 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:27:35.974 BaseBdev1_malloc 00:27:35.974 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:36.233 [2024-06-10 19:11:50.825080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:36.233 [2024-06-10 19:11:50.825121] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:36.233 [2024-06-10 19:11:50.825140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x132c200 00:27:36.233 [2024-06-10 19:11:50.825158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:36.233 [2024-06-10 19:11:50.826665] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:36.233 [2024-06-10 19:11:50.826691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:36.233 BaseBdev1 00:27:36.233 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:36.233 19:11:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:27:36.492 BaseBdev2_malloc 00:27:36.492 19:11:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:36.752 [2024-06-10 19:11:51.266751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:36.752 [2024-06-10 19:11:51.266791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:36.752 [2024-06-10 19:11:51.266808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c3d90 00:27:36.752 [2024-06-10 19:11:51.266819] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:36.752 [2024-06-10 19:11:51.268192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:36.752 [2024-06-10 19:11:51.268219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:36.752 BaseBdev2 00:27:36.752 19:11:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:27:36.752 spare_malloc 00:27:36.752 19:11:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:37.012 spare_delay 00:27:37.012 19:11:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:37.271 [2024-06-10 19:11:51.936739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:37.271 [2024-06-10 19:11:51.936778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.271 [2024-06-10 19:11:51.936797] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1324b80 00:27:37.271 [2024-06-10 19:11:51.936809] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.271 [2024-06-10 19:11:51.938186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.271 [2024-06-10 19:11:51.938213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:37.271 spare 00:27:37.271 19:11:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:37.544 [2024-06-10 19:11:52.161353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:37.544 [2024-06-10 19:11:52.162505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:37.544 [2024-06-10 19:11:52.162665] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1325d90 00:27:37.544 [2024-06-10 19:11:52.162679] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:37.544 [2024-06-10 19:11:52.162853] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c6ee0 00:27:37.544 [2024-06-10 19:11:52.162978] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1325d90 00:27:37.544 [2024-06-10 19:11:52.162988] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1325d90 00:27:37.544 [2024-06-10 19:11:52.163076] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.544 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.823 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.823 "name": "raid_bdev1", 00:27:37.823 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:37.823 "strip_size_kb": 0, 00:27:37.823 "state": "online", 00:27:37.823 "raid_level": "raid1", 00:27:37.823 "superblock": true, 00:27:37.823 "num_base_bdevs": 2, 00:27:37.823 "num_base_bdevs_discovered": 2, 00:27:37.823 "num_base_bdevs_operational": 2, 00:27:37.823 "base_bdevs_list": [ 00:27:37.823 { 00:27:37.823 "name": "BaseBdev1", 00:27:37.823 "uuid": "88b4bcfa-638e-5266-8700-3121e0864b17", 00:27:37.823 "is_configured": true, 00:27:37.823 "data_offset": 256, 00:27:37.823 "data_size": 7936 00:27:37.823 }, 00:27:37.823 { 00:27:37.823 "name": "BaseBdev2", 00:27:37.823 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:37.823 "is_configured": true, 00:27:37.823 "data_offset": 256, 00:27:37.823 "data_size": 7936 00:27:37.823 } 00:27:37.823 ] 00:27:37.823 }' 00:27:37.823 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.823 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:38.391 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:38.391 19:11:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:38.651 [2024-06-10 19:11:53.184221] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:38.651 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:27:38.651 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.651 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:38.988 [2024-06-10 19:11:53.629247] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c6ee0 00:27:38.988 
/dev/nbd0 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local i 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # break 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:38.988 1+0 records in 00:27:38.988 1+0 records out 00:27:38.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264811 s, 15.5 MB/s 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # size=4096 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:27:38.988 
19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # return 0 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:38.988 19:11:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:39.936 7936+0 records in 00:27:39.936 7936+0 records out 00:27:39.936 32505856 bytes (33 MB, 31 MiB) copied, 0.745848 s, 43.6 MB/s 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:39.936 [2024-06-10 19:11:54.636295] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.936 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:39.937 
19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:39.937 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:39.937 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:39.937 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:39.937 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:39.937 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:39.937 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:40.194 [2024-06-10 19:11:54.852912] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.194 19:11:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.194 19:11:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.453 19:11:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.453 "name": "raid_bdev1", 00:27:40.453 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:40.453 "strip_size_kb": 0, 00:27:40.453 "state": "online", 00:27:40.453 "raid_level": "raid1", 00:27:40.453 "superblock": true, 00:27:40.453 "num_base_bdevs": 2, 00:27:40.453 "num_base_bdevs_discovered": 1, 00:27:40.453 "num_base_bdevs_operational": 1, 00:27:40.453 "base_bdevs_list": [ 00:27:40.453 { 00:27:40.453 "name": null, 00:27:40.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.453 "is_configured": false, 00:27:40.453 "data_offset": 256, 00:27:40.453 "data_size": 7936 00:27:40.453 }, 00:27:40.453 { 00:27:40.453 "name": "BaseBdev2", 00:27:40.453 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:40.453 "is_configured": true, 00:27:40.453 "data_offset": 256, 00:27:40.453 "data_size": 7936 00:27:40.453 } 00:27:40.453 ] 00:27:40.453 }' 00:27:40.453 19:11:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.453 19:11:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:41.018 19:11:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:41.277 [2024-06-10 19:11:55.875609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:41.277 [2024-06-10 19:11:55.880309] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x14c5500 00:27:41.277 [2024-06-10 19:11:55.882366] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.277 19:11:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.214 19:11:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.472 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.472 "name": "raid_bdev1", 00:27:42.472 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:42.472 "strip_size_kb": 0, 00:27:42.472 "state": "online", 00:27:42.472 "raid_level": "raid1", 00:27:42.472 "superblock": true, 00:27:42.472 "num_base_bdevs": 2, 00:27:42.472 "num_base_bdevs_discovered": 2, 00:27:42.472 "num_base_bdevs_operational": 2, 00:27:42.472 "process": { 00:27:42.472 "type": "rebuild", 00:27:42.472 "target": "spare", 00:27:42.472 "progress": { 00:27:42.472 "blocks": 3072, 00:27:42.472 "percent": 38 00:27:42.472 } 00:27:42.472 }, 00:27:42.472 "base_bdevs_list": [ 00:27:42.472 { 00:27:42.472 "name": "spare", 00:27:42.472 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:42.472 
"is_configured": true, 00:27:42.472 "data_offset": 256, 00:27:42.472 "data_size": 7936 00:27:42.472 }, 00:27:42.472 { 00:27:42.472 "name": "BaseBdev2", 00:27:42.472 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:42.472 "is_configured": true, 00:27:42.472 "data_offset": 256, 00:27:42.472 "data_size": 7936 00:27:42.472 } 00:27:42.472 ] 00:27:42.472 }' 00:27:42.472 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.472 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.472 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.472 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.472 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:42.730 [2024-06-10 19:11:57.424535] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.990 [2024-06-10 19:11:57.493991] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:42.990 [2024-06-10 19:11:57.494029] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.990 [2024-06-10 19:11:57.494042] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.990 [2024-06-10 19:11:57.494050] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.990 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.991 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.991 "name": "raid_bdev1", 00:27:42.991 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:42.991 "strip_size_kb": 0, 00:27:42.991 "state": "online", 00:27:42.991 "raid_level": "raid1", 00:27:42.991 "superblock": true, 00:27:42.991 "num_base_bdevs": 2, 00:27:42.991 "num_base_bdevs_discovered": 1, 00:27:42.991 "num_base_bdevs_operational": 1, 00:27:42.991 "base_bdevs_list": [ 00:27:42.991 { 00:27:42.991 "name": null, 00:27:42.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.991 "is_configured": false, 00:27:42.991 "data_offset": 256, 00:27:42.991 "data_size": 7936 00:27:42.991 }, 00:27:42.991 { 00:27:42.991 "name": "BaseBdev2", 00:27:42.991 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:42.991 
"is_configured": true, 00:27:42.991 "data_offset": 256, 00:27:42.991 "data_size": 7936 00:27:42.991 } 00:27:42.991 ] 00:27:42.991 }' 00:27:42.991 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.991 19:11:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.557 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.815 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.815 "name": "raid_bdev1", 00:27:43.815 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:43.815 "strip_size_kb": 0, 00:27:43.815 "state": "online", 00:27:43.815 "raid_level": "raid1", 00:27:43.815 "superblock": true, 00:27:43.815 "num_base_bdevs": 2, 00:27:43.815 "num_base_bdevs_discovered": 1, 00:27:43.815 "num_base_bdevs_operational": 1, 00:27:43.815 "base_bdevs_list": [ 00:27:43.815 { 00:27:43.815 "name": null, 00:27:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.815 "is_configured": false, 00:27:43.815 "data_offset": 256, 00:27:43.815 "data_size": 7936 00:27:43.815 }, 00:27:43.815 { 00:27:43.815 "name": "BaseBdev2", 
00:27:43.815 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:43.815 "is_configured": true, 00:27:43.815 "data_offset": 256, 00:27:43.815 "data_size": 7936 00:27:43.815 } 00:27:43.815 ] 00:27:43.815 }' 00:27:43.815 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.074 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:44.075 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.075 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:44.075 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:44.075 [2024-06-10 19:11:58.817667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:44.075 [2024-06-10 19:11:58.822362] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c5af0 00:27:44.075 [2024-06-10 19:11:58.823718] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:44.333 19:11:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.269 19:11:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.528 "name": "raid_bdev1", 00:27:45.528 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:45.528 "strip_size_kb": 0, 00:27:45.528 "state": "online", 00:27:45.528 "raid_level": "raid1", 00:27:45.528 "superblock": true, 00:27:45.528 "num_base_bdevs": 2, 00:27:45.528 "num_base_bdevs_discovered": 2, 00:27:45.528 "num_base_bdevs_operational": 2, 00:27:45.528 "process": { 00:27:45.528 "type": "rebuild", 00:27:45.528 "target": "spare", 00:27:45.528 "progress": { 00:27:45.528 "blocks": 3072, 00:27:45.528 "percent": 38 00:27:45.528 } 00:27:45.528 }, 00:27:45.528 "base_bdevs_list": [ 00:27:45.528 { 00:27:45.528 "name": "spare", 00:27:45.528 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:45.528 "is_configured": true, 00:27:45.528 "data_offset": 256, 00:27:45.528 "data_size": 7936 00:27:45.528 }, 00:27:45.528 { 00:27:45.528 "name": "BaseBdev2", 00:27:45.528 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:45.528 "is_configured": true, 00:27:45.528 "data_offset": 256, 00:27:45.528 "data_size": 7936 00:27:45.528 } 00:27:45.528 ] 00:27:45.528 }' 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # 
'[' true = true ']' 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:45.528 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=945 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.528 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.787 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.787 "name": "raid_bdev1", 00:27:45.787 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:45.787 "strip_size_kb": 0, 00:27:45.787 "state": "online", 00:27:45.787 "raid_level": 
"raid1", 00:27:45.787 "superblock": true, 00:27:45.787 "num_base_bdevs": 2, 00:27:45.787 "num_base_bdevs_discovered": 2, 00:27:45.787 "num_base_bdevs_operational": 2, 00:27:45.787 "process": { 00:27:45.787 "type": "rebuild", 00:27:45.787 "target": "spare", 00:27:45.787 "progress": { 00:27:45.787 "blocks": 3840, 00:27:45.787 "percent": 48 00:27:45.787 } 00:27:45.787 }, 00:27:45.787 "base_bdevs_list": [ 00:27:45.787 { 00:27:45.787 "name": "spare", 00:27:45.787 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:45.787 "is_configured": true, 00:27:45.787 "data_offset": 256, 00:27:45.787 "data_size": 7936 00:27:45.787 }, 00:27:45.787 { 00:27:45.787 "name": "BaseBdev2", 00:27:45.787 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:45.787 "is_configured": true, 00:27:45.787 "data_offset": 256, 00:27:45.787 "data_size": 7936 00:27:45.787 } 00:27:45.787 ] 00:27:45.787 }' 00:27:45.787 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.787 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.787 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.787 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.787 19:12:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.166 "name": "raid_bdev1", 00:27:47.166 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:47.166 "strip_size_kb": 0, 00:27:47.166 "state": "online", 00:27:47.166 "raid_level": "raid1", 00:27:47.166 "superblock": true, 00:27:47.166 "num_base_bdevs": 2, 00:27:47.166 "num_base_bdevs_discovered": 2, 00:27:47.166 "num_base_bdevs_operational": 2, 00:27:47.166 "process": { 00:27:47.166 "type": "rebuild", 00:27:47.166 "target": "spare", 00:27:47.166 "progress": { 00:27:47.166 "blocks": 7168, 00:27:47.166 "percent": 90 00:27:47.166 } 00:27:47.166 }, 00:27:47.166 "base_bdevs_list": [ 00:27:47.166 { 00:27:47.166 "name": "spare", 00:27:47.166 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:47.166 "is_configured": true, 00:27:47.166 "data_offset": 256, 00:27:47.166 "data_size": 7936 00:27:47.166 }, 00:27:47.166 { 00:27:47.166 "name": "BaseBdev2", 00:27:47.166 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:47.166 "is_configured": true, 00:27:47.166 "data_offset": 256, 00:27:47.166 "data_size": 7936 00:27:47.166 } 00:27:47.166 ] 00:27:47.166 }' 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.166 19:12:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.426 [2024-06-10 19:12:01.946779] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:47.426 [2024-06-10 19:12:01.946830] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:47.426 [2024-06-10 19:12:01.946902] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.362 19:12:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.362 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.362 "name": "raid_bdev1", 00:27:48.362 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:48.362 "strip_size_kb": 0, 00:27:48.362 "state": "online", 00:27:48.362 "raid_level": "raid1", 00:27:48.362 "superblock": true, 00:27:48.362 "num_base_bdevs": 
2, 00:27:48.362 "num_base_bdevs_discovered": 2, 00:27:48.362 "num_base_bdevs_operational": 2, 00:27:48.362 "base_bdevs_list": [ 00:27:48.362 { 00:27:48.362 "name": "spare", 00:27:48.362 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:48.362 "is_configured": true, 00:27:48.362 "data_offset": 256, 00:27:48.362 "data_size": 7936 00:27:48.362 }, 00:27:48.362 { 00:27:48.362 "name": "BaseBdev2", 00:27:48.362 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:48.362 "is_configured": true, 00:27:48.362 "data_offset": 256, 00:27:48.362 "data_size": 7936 00:27:48.362 } 00:27:48.362 ] 00:27:48.362 }' 00:27:48.362 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.362 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:48.362 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.621 "name": "raid_bdev1", 00:27:48.621 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:48.621 "strip_size_kb": 0, 00:27:48.621 "state": "online", 00:27:48.621 "raid_level": "raid1", 00:27:48.621 "superblock": true, 00:27:48.621 "num_base_bdevs": 2, 00:27:48.621 "num_base_bdevs_discovered": 2, 00:27:48.621 "num_base_bdevs_operational": 2, 00:27:48.621 "base_bdevs_list": [ 00:27:48.621 { 00:27:48.621 "name": "spare", 00:27:48.621 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:48.621 "is_configured": true, 00:27:48.621 "data_offset": 256, 00:27:48.621 "data_size": 7936 00:27:48.621 }, 00:27:48.621 { 00:27:48.621 "name": "BaseBdev2", 00:27:48.621 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:48.621 "is_configured": true, 00:27:48.621 "data_offset": 256, 00:27:48.621 "data_size": 7936 00:27:48.621 } 00:27:48.621 ] 00:27:48.621 }' 00:27:48.621 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.881 19:12:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.881 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.140 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:49.140 "name": "raid_bdev1", 00:27:49.140 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:49.140 "strip_size_kb": 0, 00:27:49.140 "state": "online", 00:27:49.140 "raid_level": "raid1", 00:27:49.140 "superblock": true, 00:27:49.140 "num_base_bdevs": 2, 00:27:49.140 "num_base_bdevs_discovered": 2, 00:27:49.140 "num_base_bdevs_operational": 2, 00:27:49.140 "base_bdevs_list": [ 00:27:49.140 { 00:27:49.140 "name": "spare", 00:27:49.140 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:49.140 "is_configured": true, 00:27:49.140 "data_offset": 256, 00:27:49.140 "data_size": 7936 00:27:49.140 }, 00:27:49.140 { 00:27:49.140 "name": "BaseBdev2", 00:27:49.140 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:49.140 "is_configured": true, 00:27:49.140 "data_offset": 256, 00:27:49.140 "data_size": 7936 00:27:49.140 } 00:27:49.140 ] 00:27:49.140 }' 00:27:49.140 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:49.140 19:12:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:49.709 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:49.709 [2024-06-10 19:12:04.425613] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:49.709 [2024-06-10 19:12:04.425638] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:49.709 [2024-06-10 19:12:04.425688] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:49.709 [2024-06-10 19:12:04.425738] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:49.709 [2024-06-10 19:12:04.425749] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1325d90 name raid_bdev1, state offline 00:27:49.709 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.709 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- 
# bdev_list=('BaseBdev1' 'spare') 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:49.968 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:50.227 /dev/nbd0 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local i 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # break 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:27:50.227 19:12:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.227 1+0 records in 00:27:50.227 1+0 records out 00:27:50.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278344 s, 14.7 MB/s 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # size=4096 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # return 0 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:50.227 19:12:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:50.487 /dev/nbd1 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local i 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # break 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.487 1+0 records in 00:27:50.487 1+0 records out 00:27:50.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320419 s, 12.8 MB/s 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # size=4096 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # return 0 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:50.487 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:50.746 19:12:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:50.746 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:51.006 19:12:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:51.006 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:51.265 19:12:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:51.524 [2024-06-10 19:12:06.170139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:51.524 [2024-06-10 19:12:06.170181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.524 [2024-06-10 19:12:06.170198] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c5300 00:27:51.524 [2024-06-10 19:12:06.170210] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.524 [2024-06-10 19:12:06.171719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.524 [2024-06-10 19:12:06.171746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:51.524 [2024-06-10 19:12:06.171824] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:51.524 [2024-06-10 19:12:06.171847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:51.524 [2024-06-10 19:12:06.171939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:51.524 spare 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.524 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.525 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.525 [2024-06-10 19:12:06.272246] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x132b600 00:27:51.525 [2024-06-10 19:12:06.272261] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:51.525 [2024-06-10 19:12:06.272426] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1324530 00:27:51.525 [2024-06-10 19:12:06.272552] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x132b600 00:27:51.525 [2024-06-10 19:12:06.272562] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x132b600 00:27:51.525 [2024-06-10 19:12:06.272664] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.784 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.784 "name": "raid_bdev1", 00:27:51.784 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:51.784 "strip_size_kb": 0, 00:27:51.784 "state": "online", 00:27:51.784 "raid_level": "raid1", 00:27:51.784 "superblock": true, 00:27:51.784 "num_base_bdevs": 2, 00:27:51.784 "num_base_bdevs_discovered": 2, 00:27:51.784 "num_base_bdevs_operational": 2, 00:27:51.784 "base_bdevs_list": [ 00:27:51.784 { 00:27:51.784 "name": "spare", 00:27:51.784 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:51.784 "is_configured": true, 00:27:51.784 "data_offset": 256, 00:27:51.784 "data_size": 7936 00:27:51.784 }, 00:27:51.784 { 00:27:51.784 "name": "BaseBdev2", 00:27:51.784 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:51.784 "is_configured": true, 00:27:51.784 "data_offset": 256, 00:27:51.784 "data_size": 7936 00:27:51.784 } 00:27:51.784 ] 00:27:51.784 }' 00:27:51.784 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.784 19:12:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.352 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.610 "name": "raid_bdev1", 00:27:52.610 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:52.610 "strip_size_kb": 0, 00:27:52.610 "state": "online", 00:27:52.610 "raid_level": "raid1", 00:27:52.610 "superblock": true, 00:27:52.610 "num_base_bdevs": 2, 00:27:52.610 "num_base_bdevs_discovered": 2, 00:27:52.610 "num_base_bdevs_operational": 2, 00:27:52.610 "base_bdevs_list": [ 00:27:52.610 { 00:27:52.610 "name": "spare", 00:27:52.610 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:52.610 "is_configured": true, 00:27:52.610 "data_offset": 256, 00:27:52.610 "data_size": 7936 00:27:52.610 }, 00:27:52.610 { 00:27:52.610 "name": "BaseBdev2", 00:27:52.610 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:52.610 "is_configured": true, 00:27:52.610 "data_offset": 256, 00:27:52.610 "data_size": 7936 00:27:52.610 } 00:27:52.610 ] 00:27:52.610 }' 00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.610 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:52.869 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.869 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:53.129 [2024-06-10 19:12:07.758410] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.129 
19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.129 19:12:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.388 19:12:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.388 "name": "raid_bdev1", 00:27:53.388 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:53.388 "strip_size_kb": 0, 00:27:53.388 "state": "online", 00:27:53.388 "raid_level": "raid1", 00:27:53.388 "superblock": true, 00:27:53.388 "num_base_bdevs": 2, 00:27:53.388 "num_base_bdevs_discovered": 1, 00:27:53.388 "num_base_bdevs_operational": 1, 00:27:53.388 "base_bdevs_list": [ 00:27:53.388 { 00:27:53.388 "name": null, 00:27:53.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.388 "is_configured": false, 00:27:53.388 "data_offset": 256, 00:27:53.388 "data_size": 7936 00:27:53.388 }, 00:27:53.388 { 00:27:53.388 "name": "BaseBdev2", 00:27:53.388 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:53.388 "is_configured": true, 00:27:53.388 "data_offset": 256, 00:27:53.388 "data_size": 7936 00:27:53.388 } 00:27:53.388 ] 00:27:53.388 }' 00:27:53.388 19:12:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.388 19:12:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:53.957 19:12:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:54.217 [2024-06-10 19:12:08.749031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.217 [2024-06-10 19:12:08.749159] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:27:54.217 [2024-06-10 19:12:08.749173] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:54.217 [2024-06-10 19:12:08.749198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.217 [2024-06-10 19:12:08.753819] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10291c0 00:27:54.217 [2024-06-10 19:12:08.755872] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:54.217 19:12:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.155 19:12:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.417 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:55.417 "name": "raid_bdev1", 00:27:55.417 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:55.417 "strip_size_kb": 0, 00:27:55.417 "state": "online", 00:27:55.417 "raid_level": "raid1", 00:27:55.417 "superblock": true, 00:27:55.417 "num_base_bdevs": 2, 00:27:55.417 "num_base_bdevs_discovered": 2, 00:27:55.417 "num_base_bdevs_operational": 2, 
00:27:55.417 "process": { 00:27:55.417 "type": "rebuild", 00:27:55.417 "target": "spare", 00:27:55.417 "progress": { 00:27:55.417 "blocks": 3072, 00:27:55.417 "percent": 38 00:27:55.417 } 00:27:55.417 }, 00:27:55.417 "base_bdevs_list": [ 00:27:55.417 { 00:27:55.417 "name": "spare", 00:27:55.417 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:55.417 "is_configured": true, 00:27:55.417 "data_offset": 256, 00:27:55.417 "data_size": 7936 00:27:55.417 }, 00:27:55.417 { 00:27:55.417 "name": "BaseBdev2", 00:27:55.417 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:55.417 "is_configured": true, 00:27:55.417 "data_offset": 256, 00:27:55.417 "data_size": 7936 00:27:55.417 } 00:27:55.417 ] 00:27:55.417 }' 00:27:55.417 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:55.417 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:55.417 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:55.417 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:55.417 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:55.677 [2024-06-10 19:12:10.298075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.677 [2024-06-10 19:12:10.367521] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:55.677 [2024-06-10 19:12:10.367560] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.677 [2024-06-10 19:12:10.367574] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.677 [2024-06-10 19:12:10.367588] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.677 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.937 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.937 "name": "raid_bdev1", 00:27:55.937 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:55.937 "strip_size_kb": 0, 00:27:55.937 "state": "online", 00:27:55.937 "raid_level": "raid1", 00:27:55.937 "superblock": true, 00:27:55.937 "num_base_bdevs": 2, 00:27:55.937 "num_base_bdevs_discovered": 1, 00:27:55.937 "num_base_bdevs_operational": 1, 00:27:55.937 "base_bdevs_list": [ 00:27:55.937 { 
00:27:55.937 "name": null, 00:27:55.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.937 "is_configured": false, 00:27:55.937 "data_offset": 256, 00:27:55.937 "data_size": 7936 00:27:55.937 }, 00:27:55.937 { 00:27:55.937 "name": "BaseBdev2", 00:27:55.937 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:55.937 "is_configured": true, 00:27:55.937 "data_offset": 256, 00:27:55.937 "data_size": 7936 00:27:55.937 } 00:27:55.937 ] 00:27:55.937 }' 00:27:55.937 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.937 19:12:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:56.505 19:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:56.765 [2024-06-10 19:12:11.418414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:56.765 [2024-06-10 19:12:11.418458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.765 [2024-06-10 19:12:11.418477] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1324db0 00:27:56.765 [2024-06-10 19:12:11.418488] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.765 [2024-06-10 19:12:11.418828] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.765 [2024-06-10 19:12:11.418845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:56.765 [2024-06-10 19:12:11.418915] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:56.765 [2024-06-10 19:12:11.418926] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:56.765 [2024-06-10 19:12:11.418935] bdev_raid.c:3620:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:56.765 [2024-06-10 19:12:11.418952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.765 [2024-06-10 19:12:11.423629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x132b9e0 00:27:56.765 [2024-06-10 19:12:11.425027] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:56.765 spare 00:27:56.765 19:12:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.702 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.962 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.962 "name": "raid_bdev1", 00:27:57.962 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:57.962 "strip_size_kb": 0, 00:27:57.962 "state": "online", 00:27:57.962 "raid_level": "raid1", 00:27:57.962 "superblock": true, 00:27:57.962 "num_base_bdevs": 2, 00:27:57.962 "num_base_bdevs_discovered": 2, 00:27:57.962 "num_base_bdevs_operational": 2, 00:27:57.962 "process": { 00:27:57.962 "type": "rebuild", 00:27:57.962 "target": 
"spare", 00:27:57.962 "progress": { 00:27:57.962 "blocks": 3072, 00:27:57.962 "percent": 38 00:27:57.962 } 00:27:57.962 }, 00:27:57.962 "base_bdevs_list": [ 00:27:57.962 { 00:27:57.962 "name": "spare", 00:27:57.962 "uuid": "873ac29f-e6b7-599b-9ac8-616e8ec1e586", 00:27:57.962 "is_configured": true, 00:27:57.962 "data_offset": 256, 00:27:57.962 "data_size": 7936 00:27:57.962 }, 00:27:57.962 { 00:27:57.962 "name": "BaseBdev2", 00:27:57.962 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:57.962 "is_configured": true, 00:27:57.962 "data_offset": 256, 00:27:57.962 "data_size": 7936 00:27:57.962 } 00:27:57.962 ] 00:27:57.962 }' 00:27:57.962 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:58.220 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.220 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:58.220 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:58.220 19:12:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:58.220 [2024-06-10 19:12:12.968093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.478 [2024-06-10 19:12:13.036650] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:58.478 [2024-06-10 19:12:13.036691] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.478 [2024-06-10 19:12:13.036705] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.478 [2024-06-10 19:12:13.036713] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.478 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.738 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.738 "name": "raid_bdev1", 00:27:58.738 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:58.738 "strip_size_kb": 0, 00:27:58.738 "state": "online", 00:27:58.738 "raid_level": "raid1", 00:27:58.738 "superblock": true, 00:27:58.738 "num_base_bdevs": 2, 00:27:58.738 "num_base_bdevs_discovered": 1, 00:27:58.738 "num_base_bdevs_operational": 1, 00:27:58.738 "base_bdevs_list": [ 00:27:58.738 { 00:27:58.738 "name": null, 00:27:58.738 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:58.738 "is_configured": false, 00:27:58.738 "data_offset": 256, 00:27:58.738 "data_size": 7936 00:27:58.738 }, 00:27:58.738 { 00:27:58.738 "name": "BaseBdev2", 00:27:58.738 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:58.738 "is_configured": true, 00:27:58.738 "data_offset": 256, 00:27:58.738 "data_size": 7936 00:27:58.738 } 00:27:58.738 ] 00:27:58.738 }' 00:27:58.738 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.738 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.305 19:12:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.563 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.563 "name": "raid_bdev1", 00:27:59.563 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:27:59.563 "strip_size_kb": 0, 00:27:59.563 "state": "online", 00:27:59.563 "raid_level": "raid1", 00:27:59.563 "superblock": true, 00:27:59.563 "num_base_bdevs": 2, 00:27:59.563 "num_base_bdevs_discovered": 1, 00:27:59.563 "num_base_bdevs_operational": 1, 00:27:59.563 "base_bdevs_list": [ 00:27:59.563 { 00:27:59.563 
"name": null, 00:27:59.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.563 "is_configured": false, 00:27:59.563 "data_offset": 256, 00:27:59.563 "data_size": 7936 00:27:59.563 }, 00:27:59.563 { 00:27:59.563 "name": "BaseBdev2", 00:27:59.564 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:27:59.564 "is_configured": true, 00:27:59.564 "data_offset": 256, 00:27:59.564 "data_size": 7936 00:27:59.564 } 00:27:59.564 ] 00:27:59.564 }' 00:27:59.564 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.564 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:59.564 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.564 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:59.564 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:59.884 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:59.884 [2024-06-10 19:12:14.629080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:59.884 [2024-06-10 19:12:14.629122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:59.884 [2024-06-10 19:12:14.629139] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1323500 00:27:59.884 [2024-06-10 19:12:14.629151] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:59.884 [2024-06-10 19:12:14.629458] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:59.884 [2024-06-10 19:12:14.629474] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:59.884 [2024-06-10 19:12:14.629530] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:59.884 [2024-06-10 19:12:14.629541] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:59.884 [2024-06-10 19:12:14.629550] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:59.884 BaseBdev1 00:28:00.143 19:12:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:28:01.081 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.340 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:01.340 "name": "raid_bdev1", 00:28:01.340 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:28:01.340 "strip_size_kb": 0, 00:28:01.340 "state": "online", 00:28:01.340 "raid_level": "raid1", 00:28:01.340 "superblock": true, 00:28:01.340 "num_base_bdevs": 2, 00:28:01.340 "num_base_bdevs_discovered": 1, 00:28:01.340 "num_base_bdevs_operational": 1, 00:28:01.340 "base_bdevs_list": [ 00:28:01.340 { 00:28:01.340 "name": null, 00:28:01.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.340 "is_configured": false, 00:28:01.340 "data_offset": 256, 00:28:01.340 "data_size": 7936 00:28:01.340 }, 00:28:01.340 { 00:28:01.340 "name": "BaseBdev2", 00:28:01.340 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:28:01.340 "is_configured": true, 00:28:01.340 "data_offset": 256, 00:28:01.340 "data_size": 7936 00:28:01.340 } 00:28:01.340 ] 00:28:01.340 }' 00:28:01.340 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:01.340 19:12:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.909 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.169 "name": "raid_bdev1", 00:28:02.169 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:28:02.169 "strip_size_kb": 0, 00:28:02.169 "state": "online", 00:28:02.169 "raid_level": "raid1", 00:28:02.169 "superblock": true, 00:28:02.169 "num_base_bdevs": 2, 00:28:02.169 "num_base_bdevs_discovered": 1, 00:28:02.169 "num_base_bdevs_operational": 1, 00:28:02.169 "base_bdevs_list": [ 00:28:02.169 { 00:28:02.169 "name": null, 00:28:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.169 "is_configured": false, 00:28:02.169 "data_offset": 256, 00:28:02.169 "data_size": 7936 00:28:02.169 }, 00:28:02.169 { 00:28:02.169 "name": "BaseBdev2", 00:28:02.169 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:28:02.169 "is_configured": true, 00:28:02.169 "data_offset": 256, 00:28:02.169 "data_size": 7936 00:28:02.169 } 00:28:02.169 ] 00:28:02.169 }' 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@649 -- # local es=0 
00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:02.169 19:12:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:02.428 [2024-06-10 19:12:16.991340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.428 [2024-06-10 19:12:16.991450] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:02.428 [2024-06-10 
19:12:16.991463] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:02.428 request: 00:28:02.428 { 00:28:02.428 "raid_bdev": "raid_bdev1", 00:28:02.428 "base_bdev": "BaseBdev1", 00:28:02.428 "method": "bdev_raid_add_base_bdev", 00:28:02.428 "req_id": 1 00:28:02.428 } 00:28:02.428 Got JSON-RPC error response 00:28:02.428 response: 00:28:02.428 { 00:28:02.428 "code": -22, 00:28:02.428 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:02.428 } 00:28:02.428 19:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # es=1 00:28:02.428 19:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:02.428 19:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:02.428 19:12:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:02.428 19:12:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.366 19:12:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.366 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.625 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.625 "name": "raid_bdev1", 00:28:03.625 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:28:03.625 "strip_size_kb": 0, 00:28:03.625 "state": "online", 00:28:03.625 "raid_level": "raid1", 00:28:03.625 "superblock": true, 00:28:03.625 "num_base_bdevs": 2, 00:28:03.625 "num_base_bdevs_discovered": 1, 00:28:03.625 "num_base_bdevs_operational": 1, 00:28:03.625 "base_bdevs_list": [ 00:28:03.625 { 00:28:03.625 "name": null, 00:28:03.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.625 "is_configured": false, 00:28:03.625 "data_offset": 256, 00:28:03.625 "data_size": 7936 00:28:03.625 }, 00:28:03.625 { 00:28:03.625 "name": "BaseBdev2", 00:28:03.625 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:28:03.625 "is_configured": true, 00:28:03.625 "data_offset": 256, 00:28:03.625 "data_size": 7936 00:28:03.625 } 00:28:03.625 ] 00:28:03.625 }' 00:28:03.625 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.625 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.194 19:12:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.453 "name": "raid_bdev1", 00:28:04.453 "uuid": "a984a62c-7246-4113-94f6-2a20f1cf07f8", 00:28:04.453 "strip_size_kb": 0, 00:28:04.453 "state": "online", 00:28:04.453 "raid_level": "raid1", 00:28:04.453 "superblock": true, 00:28:04.453 "num_base_bdevs": 2, 00:28:04.453 "num_base_bdevs_discovered": 1, 00:28:04.453 "num_base_bdevs_operational": 1, 00:28:04.453 "base_bdevs_list": [ 00:28:04.453 { 00:28:04.453 "name": null, 00:28:04.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.453 "is_configured": false, 00:28:04.453 "data_offset": 256, 00:28:04.453 "data_size": 7936 00:28:04.453 }, 00:28:04.453 { 00:28:04.453 "name": "BaseBdev2", 00:28:04.453 "uuid": "e9a31cc1-14de-5a2a-b429-590a04106570", 00:28:04.453 "is_configured": true, 00:28:04.453 "data_offset": 256, 00:28:04.453 "data_size": 7936 00:28:04.453 } 00:28:04.453 ] 00:28:04.453 }' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 1788624 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@949 -- # '[' -z 1788624 ']' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # kill -0 1788624 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # uname 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1788624 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1788624' 00:28:04.453 killing process with pid 1788624 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # kill 1788624 00:28:04.453 Received shutdown signal, test time was about 60.000000 seconds 00:28:04.453 00:28:04.453 Latency(us) 00:28:04.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.453 =================================================================================================================== 00:28:04.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:04.453 [2024-06-10 19:12:19.198515] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:04.453 [2024-06-10 19:12:19.198594] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:04.453 [2024-06-10 19:12:19.198635] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:28:04.453 [2024-06-10 19:12:19.198646] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x132b600 name raid_bdev1, state offline 00:28:04.453 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # wait 1788624 00:28:04.713 [2024-06-10 19:12:19.222436] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:04.713 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:28:04.713 00:28:04.713 real 0m29.985s 00:28:04.713 user 0m46.178s 00:28:04.713 sys 0m4.959s 00:28:04.713 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:04.713 19:12:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:04.713 ************************************ 00:28:04.713 END TEST raid_rebuild_test_sb_4k 00:28:04.713 ************************************ 00:28:04.713 19:12:19 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:28:04.713 19:12:19 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:28:04.713 19:12:19 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:28:04.713 19:12:19 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:04.713 19:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 ************************************ 00:28:04.974 START TEST raid_state_function_test_sb_md_separate 00:28:04.974 ************************************ 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:28:04.974 19:12:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:04.974 19:12:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=1794710 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1794710' 00:28:04.974 Process raid pid: 1794710 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 1794710 /var/tmp/spdk-raid.sock 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@830 -- # '[' -z 1794710 ']' 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:04.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:04.974 19:12:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:04.974 [2024-06-10 19:12:19.560784] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:28:04.974 [2024-06-10 19:12:19.560841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.0 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.1 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.2 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.3 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.4 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.5 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.6 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:01.7 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.0 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 
0000:b6:02.1 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.2 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.3 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.4 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.5 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.6 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b6:02.7 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.0 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.1 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.2 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.3 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.4 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.5 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.6 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:01.7 cannot be 
used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.0 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.1 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.2 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.3 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.4 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.5 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.6 cannot be used 00:28:04.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:04.974 EAL: Requested device 0000:b8:02.7 cannot be used 00:28:04.974 [2024-06-10 19:12:19.694278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.234 [2024-06-10 19:12:19.782285] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.234 [2024-06-10 19:12:19.843481] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.234 [2024-06-10 19:12:19.843510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.802 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:05.802 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@863 -- # return 0 00:28:05.802 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:06.061 [2024-06-10 19:12:20.670448] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:06.061 [2024-06-10 19:12:20.670486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:06.061 [2024-06-10 19:12:20.670495] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:06.061 [2024-06-10 19:12:20.670506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.061 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.321 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.321 "name": "Existed_Raid", 00:28:06.321 "uuid": "0d507189-3842-4d09-acc5-204e165c1076", 00:28:06.321 "strip_size_kb": 0, 00:28:06.321 "state": "configuring", 00:28:06.321 "raid_level": "raid1", 00:28:06.321 "superblock": true, 00:28:06.321 "num_base_bdevs": 2, 00:28:06.321 "num_base_bdevs_discovered": 0, 00:28:06.321 "num_base_bdevs_operational": 2, 00:28:06.321 "base_bdevs_list": [ 00:28:06.321 { 00:28:06.321 "name": "BaseBdev1", 00:28:06.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.321 "is_configured": false, 00:28:06.321 "data_offset": 0, 00:28:06.321 "data_size": 0 00:28:06.321 }, 00:28:06.321 { 00:28:06.321 "name": "BaseBdev2", 00:28:06.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.321 "is_configured": false, 00:28:06.321 "data_offset": 0, 00:28:06.321 "data_size": 0 00:28:06.321 } 00:28:06.321 ] 00:28:06.321 }' 00:28:06.321 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.321 19:12:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:06.889 19:12:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:07.148 [2024-06-10 19:12:21.676964] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:07.148 [2024-06-10 19:12:21.676991] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bbff10 name Existed_Raid, state 
configuring 00:28:07.148 19:12:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:07.148 [2024-06-10 19:12:21.901565] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:07.148 [2024-06-10 19:12:21.901594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:07.149 [2024-06-10 19:12:21.901603] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:07.149 [2024-06-10 19:12:21.901613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:07.408 19:12:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:28:07.408 [2024-06-10 19:12:22.140279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.408 BaseBdev1 00:28:07.408 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:07.408 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:28:07.408 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:07.408 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local i 00:28:07.408 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:07.408 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:07.408 19:12:22 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:07.666 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:07.925 [ 00:28:07.925 { 00:28:07.925 "name": "BaseBdev1", 00:28:07.925 "aliases": [ 00:28:07.925 "248e38b0-0cb4-4398-b327-313a89c9f3db" 00:28:07.925 ], 00:28:07.925 "product_name": "Malloc disk", 00:28:07.925 "block_size": 4096, 00:28:07.925 "num_blocks": 8192, 00:28:07.925 "uuid": "248e38b0-0cb4-4398-b327-313a89c9f3db", 00:28:07.925 "md_size": 32, 00:28:07.925 "md_interleave": false, 00:28:07.925 "dif_type": 0, 00:28:07.925 "assigned_rate_limits": { 00:28:07.925 "rw_ios_per_sec": 0, 00:28:07.925 "rw_mbytes_per_sec": 0, 00:28:07.925 "r_mbytes_per_sec": 0, 00:28:07.925 "w_mbytes_per_sec": 0 00:28:07.925 }, 00:28:07.925 "claimed": true, 00:28:07.925 "claim_type": "exclusive_write", 00:28:07.925 "zoned": false, 00:28:07.925 "supported_io_types": { 00:28:07.925 "read": true, 00:28:07.925 "write": true, 00:28:07.925 "unmap": true, 00:28:07.925 "write_zeroes": true, 00:28:07.925 "flush": true, 00:28:07.925 "reset": true, 00:28:07.925 "compare": false, 00:28:07.925 "compare_and_write": false, 00:28:07.925 "abort": true, 00:28:07.925 "nvme_admin": false, 00:28:07.925 "nvme_io": false 00:28:07.925 }, 00:28:07.925 "memory_domains": [ 00:28:07.925 { 00:28:07.925 "dma_device_id": "system", 00:28:07.925 "dma_device_type": 1 00:28:07.925 }, 00:28:07.925 { 00:28:07.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.925 "dma_device_type": 2 00:28:07.925 } 00:28:07.925 ], 00:28:07.925 "driver_specific": {} 00:28:07.925 } 00:28:07.925 ] 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # 
return 0 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.925 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.184 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.184 "name": "Existed_Raid", 00:28:08.184 "uuid": "ae9ad095-f72e-47e3-a30a-a3b1afba8493", 00:28:08.184 "strip_size_kb": 0, 00:28:08.184 "state": "configuring", 00:28:08.184 "raid_level": 
"raid1", 00:28:08.184 "superblock": true, 00:28:08.184 "num_base_bdevs": 2, 00:28:08.184 "num_base_bdevs_discovered": 1, 00:28:08.184 "num_base_bdevs_operational": 2, 00:28:08.184 "base_bdevs_list": [ 00:28:08.184 { 00:28:08.184 "name": "BaseBdev1", 00:28:08.184 "uuid": "248e38b0-0cb4-4398-b327-313a89c9f3db", 00:28:08.184 "is_configured": true, 00:28:08.184 "data_offset": 256, 00:28:08.184 "data_size": 7936 00:28:08.184 }, 00:28:08.184 { 00:28:08.184 "name": "BaseBdev2", 00:28:08.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.184 "is_configured": false, 00:28:08.184 "data_offset": 0, 00:28:08.184 "data_size": 0 00:28:08.184 } 00:28:08.184 ] 00:28:08.184 }' 00:28:08.184 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.184 19:12:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:08.752 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:09.011 [2024-06-10 19:12:23.620210] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:09.011 [2024-06-10 19:12:23.620244] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bbf800 name Existed_Raid, state configuring 00:28:09.011 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:09.270 [2024-06-10 19:12:23.800714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:09.270 [2024-06-10 19:12:23.802100] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:09.270 [2024-06-10 19:12:23.802132] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:09.270 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.271 19:12:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.530 
19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.530 "name": "Existed_Raid", 00:28:09.530 "uuid": "44049e9c-c840-4370-9f6c-8d1456ca8351", 00:28:09.530 "strip_size_kb": 0, 00:28:09.530 "state": "configuring", 00:28:09.530 "raid_level": "raid1", 00:28:09.530 "superblock": true, 00:28:09.530 "num_base_bdevs": 2, 00:28:09.530 "num_base_bdevs_discovered": 1, 00:28:09.530 "num_base_bdevs_operational": 2, 00:28:09.530 "base_bdevs_list": [ 00:28:09.530 { 00:28:09.530 "name": "BaseBdev1", 00:28:09.530 "uuid": "248e38b0-0cb4-4398-b327-313a89c9f3db", 00:28:09.530 "is_configured": true, 00:28:09.530 "data_offset": 256, 00:28:09.530 "data_size": 7936 00:28:09.530 }, 00:28:09.530 { 00:28:09.530 "name": "BaseBdev2", 00:28:09.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.530 "is_configured": false, 00:28:09.530 "data_offset": 0, 00:28:09.530 "data_size": 0 00:28:09.530 } 00:28:09.530 ] 00:28:09.530 }' 00:28:09.530 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.530 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:28:10.098 [2024-06-10 19:12:24.747085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.098 [2024-06-10 19:12:24.747206] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bbef40 00:28:10.098 [2024-06-10 19:12:24.747217] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:10.098 [2024-06-10 19:12:24.747272] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bbe980 00:28:10.098 [2024-06-10 19:12:24.747356] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bbef40 00:28:10.098 [2024-06-10 19:12:24.747365] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1bbef40 00:28:10.098 [2024-06-10 19:12:24.747424] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.098 BaseBdev2 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local i 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:28:10.098 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:10.357 19:12:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:10.617 [ 00:28:10.617 { 00:28:10.617 "name": "BaseBdev2", 00:28:10.617 "aliases": [ 00:28:10.617 "857573de-1c25-4e93-beb8-81268bc52bae" 00:28:10.617 ], 00:28:10.617 "product_name": "Malloc disk", 00:28:10.617 "block_size": 4096, 00:28:10.617 "num_blocks": 8192, 00:28:10.617 "uuid": "857573de-1c25-4e93-beb8-81268bc52bae", 00:28:10.617 "md_size": 32, 00:28:10.617 "md_interleave": false, 00:28:10.617 
"dif_type": 0, 00:28:10.617 "assigned_rate_limits": { 00:28:10.617 "rw_ios_per_sec": 0, 00:28:10.617 "rw_mbytes_per_sec": 0, 00:28:10.617 "r_mbytes_per_sec": 0, 00:28:10.617 "w_mbytes_per_sec": 0 00:28:10.617 }, 00:28:10.617 "claimed": true, 00:28:10.617 "claim_type": "exclusive_write", 00:28:10.617 "zoned": false, 00:28:10.617 "supported_io_types": { 00:28:10.617 "read": true, 00:28:10.617 "write": true, 00:28:10.617 "unmap": true, 00:28:10.617 "write_zeroes": true, 00:28:10.617 "flush": true, 00:28:10.617 "reset": true, 00:28:10.617 "compare": false, 00:28:10.617 "compare_and_write": false, 00:28:10.617 "abort": true, 00:28:10.617 "nvme_admin": false, 00:28:10.617 "nvme_io": false 00:28:10.617 }, 00:28:10.617 "memory_domains": [ 00:28:10.617 { 00:28:10.617 "dma_device_id": "system", 00:28:10.617 "dma_device_type": 1 00:28:10.617 }, 00:28:10.617 { 00:28:10.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.617 "dma_device_type": 2 00:28:10.617 } 00:28:10.617 ], 00:28:10.617 "driver_specific": {} 00:28:10.617 } 00:28:10.617 ] 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # return 0 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.617 19:12:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.617 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.618 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.618 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.618 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.618 "name": "Existed_Raid", 00:28:10.618 "uuid": "44049e9c-c840-4370-9f6c-8d1456ca8351", 00:28:10.618 "strip_size_kb": 0, 00:28:10.618 "state": "online", 00:28:10.618 "raid_level": "raid1", 00:28:10.618 "superblock": true, 00:28:10.618 "num_base_bdevs": 2, 00:28:10.618 "num_base_bdevs_discovered": 2, 00:28:10.618 "num_base_bdevs_operational": 2, 00:28:10.618 "base_bdevs_list": [ 00:28:10.618 { 00:28:10.618 "name": "BaseBdev1", 00:28:10.618 "uuid": "248e38b0-0cb4-4398-b327-313a89c9f3db", 00:28:10.618 "is_configured": true, 00:28:10.618 "data_offset": 256, 00:28:10.618 "data_size": 7936 00:28:10.618 }, 00:28:10.618 { 00:28:10.618 "name": "BaseBdev2", 00:28:10.618 "uuid": "857573de-1c25-4e93-beb8-81268bc52bae", 00:28:10.618 "is_configured": true, 00:28:10.618 
"data_offset": 256, 00:28:10.618 "data_size": 7936 00:28:10.618 } 00:28:10.618 ] 00:28:10.618 }' 00:28:10.618 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.618 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:11.187 19:12:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:11.446 [2024-06-10 19:12:26.122974] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.446 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:11.446 "name": "Existed_Raid", 00:28:11.446 "aliases": [ 00:28:11.446 "44049e9c-c840-4370-9f6c-8d1456ca8351" 00:28:11.446 ], 00:28:11.446 "product_name": "Raid Volume", 00:28:11.446 "block_size": 4096, 00:28:11.446 "num_blocks": 7936, 00:28:11.446 "uuid": "44049e9c-c840-4370-9f6c-8d1456ca8351", 00:28:11.446 "md_size": 32, 
00:28:11.446 "md_interleave": false, 00:28:11.446 "dif_type": 0, 00:28:11.446 "assigned_rate_limits": { 00:28:11.446 "rw_ios_per_sec": 0, 00:28:11.446 "rw_mbytes_per_sec": 0, 00:28:11.446 "r_mbytes_per_sec": 0, 00:28:11.446 "w_mbytes_per_sec": 0 00:28:11.446 }, 00:28:11.446 "claimed": false, 00:28:11.446 "zoned": false, 00:28:11.446 "supported_io_types": { 00:28:11.446 "read": true, 00:28:11.446 "write": true, 00:28:11.446 "unmap": false, 00:28:11.446 "write_zeroes": true, 00:28:11.446 "flush": false, 00:28:11.446 "reset": true, 00:28:11.446 "compare": false, 00:28:11.446 "compare_and_write": false, 00:28:11.446 "abort": false, 00:28:11.446 "nvme_admin": false, 00:28:11.446 "nvme_io": false 00:28:11.446 }, 00:28:11.446 "memory_domains": [ 00:28:11.446 { 00:28:11.446 "dma_device_id": "system", 00:28:11.446 "dma_device_type": 1 00:28:11.446 }, 00:28:11.446 { 00:28:11.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.446 "dma_device_type": 2 00:28:11.446 }, 00:28:11.446 { 00:28:11.446 "dma_device_id": "system", 00:28:11.446 "dma_device_type": 1 00:28:11.446 }, 00:28:11.446 { 00:28:11.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.446 "dma_device_type": 2 00:28:11.446 } 00:28:11.446 ], 00:28:11.446 "driver_specific": { 00:28:11.446 "raid": { 00:28:11.446 "uuid": "44049e9c-c840-4370-9f6c-8d1456ca8351", 00:28:11.446 "strip_size_kb": 0, 00:28:11.446 "state": "online", 00:28:11.446 "raid_level": "raid1", 00:28:11.446 "superblock": true, 00:28:11.446 "num_base_bdevs": 2, 00:28:11.446 "num_base_bdevs_discovered": 2, 00:28:11.446 "num_base_bdevs_operational": 2, 00:28:11.446 "base_bdevs_list": [ 00:28:11.446 { 00:28:11.446 "name": "BaseBdev1", 00:28:11.446 "uuid": "248e38b0-0cb4-4398-b327-313a89c9f3db", 00:28:11.446 "is_configured": true, 00:28:11.446 "data_offset": 256, 00:28:11.446 "data_size": 7936 00:28:11.446 }, 00:28:11.446 { 00:28:11.446 "name": "BaseBdev2", 00:28:11.446 "uuid": "857573de-1c25-4e93-beb8-81268bc52bae", 00:28:11.446 "is_configured": true, 
00:28:11.446 "data_offset": 256, 00:28:11.446 "data_size": 7936 00:28:11.446 } 00:28:11.446 ] 00:28:11.446 } 00:28:11.446 } 00:28:11.446 }' 00:28:11.446 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:11.446 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:11.446 BaseBdev2' 00:28:11.446 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:11.446 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:11.446 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:11.706 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:11.706 "name": "BaseBdev1", 00:28:11.706 "aliases": [ 00:28:11.706 "248e38b0-0cb4-4398-b327-313a89c9f3db" 00:28:11.706 ], 00:28:11.706 "product_name": "Malloc disk", 00:28:11.706 "block_size": 4096, 00:28:11.706 "num_blocks": 8192, 00:28:11.706 "uuid": "248e38b0-0cb4-4398-b327-313a89c9f3db", 00:28:11.706 "md_size": 32, 00:28:11.706 "md_interleave": false, 00:28:11.706 "dif_type": 0, 00:28:11.706 "assigned_rate_limits": { 00:28:11.706 "rw_ios_per_sec": 0, 00:28:11.706 "rw_mbytes_per_sec": 0, 00:28:11.706 "r_mbytes_per_sec": 0, 00:28:11.706 "w_mbytes_per_sec": 0 00:28:11.706 }, 00:28:11.706 "claimed": true, 00:28:11.706 "claim_type": "exclusive_write", 00:28:11.706 "zoned": false, 00:28:11.706 "supported_io_types": { 00:28:11.706 "read": true, 00:28:11.706 "write": true, 00:28:11.706 "unmap": true, 00:28:11.706 "write_zeroes": true, 00:28:11.706 "flush": true, 00:28:11.706 "reset": true, 00:28:11.706 "compare": false, 
00:28:11.706 "compare_and_write": false, 00:28:11.706 "abort": true, 00:28:11.706 "nvme_admin": false, 00:28:11.706 "nvme_io": false 00:28:11.706 }, 00:28:11.706 "memory_domains": [ 00:28:11.706 { 00:28:11.706 "dma_device_id": "system", 00:28:11.706 "dma_device_type": 1 00:28:11.706 }, 00:28:11.706 { 00:28:11.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.706 "dma_device_type": 2 00:28:11.706 } 00:28:11.706 ], 00:28:11.706 "driver_specific": {} 00:28:11.706 }' 00:28:11.706 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.706 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.966 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.225 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:12.225 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:28:12.225 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:12.225 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:12.225 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:12.225 "name": "BaseBdev2", 00:28:12.225 "aliases": [ 00:28:12.225 "857573de-1c25-4e93-beb8-81268bc52bae" 00:28:12.225 ], 00:28:12.225 "product_name": "Malloc disk", 00:28:12.225 "block_size": 4096, 00:28:12.225 "num_blocks": 8192, 00:28:12.225 "uuid": "857573de-1c25-4e93-beb8-81268bc52bae", 00:28:12.225 "md_size": 32, 00:28:12.225 "md_interleave": false, 00:28:12.225 "dif_type": 0, 00:28:12.225 "assigned_rate_limits": { 00:28:12.225 "rw_ios_per_sec": 0, 00:28:12.225 "rw_mbytes_per_sec": 0, 00:28:12.225 "r_mbytes_per_sec": 0, 00:28:12.225 "w_mbytes_per_sec": 0 00:28:12.225 }, 00:28:12.225 "claimed": true, 00:28:12.225 "claim_type": "exclusive_write", 00:28:12.225 "zoned": false, 00:28:12.225 "supported_io_types": { 00:28:12.225 "read": true, 00:28:12.225 "write": true, 00:28:12.225 "unmap": true, 00:28:12.225 "write_zeroes": true, 00:28:12.225 "flush": true, 00:28:12.225 "reset": true, 00:28:12.225 "compare": false, 00:28:12.225 "compare_and_write": false, 00:28:12.225 "abort": true, 00:28:12.225 "nvme_admin": false, 00:28:12.225 "nvme_io": false 00:28:12.225 }, 00:28:12.225 "memory_domains": [ 00:28:12.225 { 00:28:12.225 "dma_device_id": "system", 00:28:12.225 "dma_device_type": 1 00:28:12.225 }, 00:28:12.225 { 00:28:12.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.225 "dma_device_type": 2 00:28:12.225 } 00:28:12.225 ], 00:28:12.225 "driver_specific": {} 00:28:12.225 }' 00:28:12.225 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 
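The trace above runs the same four `jq` probes (`bdev_raid.sh@205`–`@208`) against each base bdev, confirming 4096-byte blocks with 32 bytes of metadata held in a separate buffer (`md_interleave: false`, `dif_type: 0`). A minimal sketch of that check in Python, using a record abridged from the BaseBdev2 JSON shown in the log (field values copied from the trace, everything else omitted):

```python
import json

# Abridged bdev_get_bdevs output for one base bdev, as dumped above.
bdev_json = '''
{
  "name": "BaseBdev2",
  "product_name": "Malloc disk",
  "block_size": 4096,
  "num_blocks": 8192,
  "md_size": 32,
  "md_interleave": false,
  "dif_type": 0
}
'''

def check_md_separate_bdev(info: dict) -> bool:
    """Mirror the jq checks at bdev_raid.sh@205-208: 4 KiB blocks,
    32-byte metadata kept separate (not interleaved), no DIF."""
    return (info["block_size"] == 4096
            and info["md_size"] == 32
            and info["md_interleave"] is False
            and info["dif_type"] == 0)

info = json.loads(bdev_json)
print(check_md_separate_bdev(info))  # True for the bdev shown in the log
```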
00:28:12.485 19:12:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:12.485 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.744 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.744 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:12.744 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:12.744 [2024-06-10 19:12:27.498417] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:13.004 19:12:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.004 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.263 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:28:13.263 "name": "Existed_Raid", 00:28:13.263 "uuid": "44049e9c-c840-4370-9f6c-8d1456ca8351", 00:28:13.263 "strip_size_kb": 0, 00:28:13.263 "state": "online", 00:28:13.263 "raid_level": "raid1", 00:28:13.263 "superblock": true, 00:28:13.263 "num_base_bdevs": 2, 00:28:13.263 "num_base_bdevs_discovered": 1, 00:28:13.263 "num_base_bdevs_operational": 1, 00:28:13.263 "base_bdevs_list": [ 00:28:13.263 { 00:28:13.263 "name": null, 00:28:13.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.263 "is_configured": false, 00:28:13.263 "data_offset": 256, 00:28:13.263 "data_size": 7936 00:28:13.263 }, 00:28:13.263 { 00:28:13.263 "name": "BaseBdev2", 00:28:13.263 "uuid": "857573de-1c25-4e93-beb8-81268bc52bae", 00:28:13.263 "is_configured": true, 00:28:13.263 "data_offset": 256, 00:28:13.263 "data_size": 7936 00:28:13.263 } 00:28:13.263 ] 00:28:13.263 }' 00:28:13.263 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.263 19:12:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:13.831 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:13.831 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:13.831 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.831 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:13.831 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:13.831 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:13.831 19:12:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:14.090 [2024-06-10 19:12:28.764006] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:14.090 [2024-06-10 19:12:28.764082] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.090 [2024-06-10 19:12:28.775117] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.090 [2024-06-10 19:12:28.775149] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.090 [2024-06-10 19:12:28.775160] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bbef40 name Existed_Raid, state offline 00:28:14.090 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:14.090 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:14.090 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.090 19:12:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 1794710 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@949 -- # '[' -z 1794710 ']' 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # kill -0 1794710 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # uname 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1794710 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1794710' 00:28:14.349 killing process with pid 1794710 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # kill 1794710 00:28:14.349 [2024-06-10 19:12:29.080479] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:14.349 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # wait 1794710 00:28:14.349 [2024-06-10 19:12:29.081329] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:14.608 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:28:14.608 00:28:14.608 real 0m9.775s 00:28:14.608 user 0m17.246s 00:28:14.608 sys 0m1.962s 00:28:14.608 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:14.608 19:12:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:14.608 ************************************ 
00:28:14.608 END TEST raid_state_function_test_sb_md_separate 00:28:14.608 ************************************ 00:28:14.608 19:12:29 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:28:14.608 19:12:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:28:14.608 19:12:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:14.608 19:12:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:14.608 ************************************ 00:28:14.608 START TEST raid_superblock_test_md_separate 00:28:14.608 ************************************ 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:28:14.608 19:12:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=1796541 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 1796541 /var/tmp/spdk-raid.sock 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@830 -- # '[' -z 1796541 ']' 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:14.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
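Throughout this run, `verify_raid_bdev_state` (`bdev_raid.sh@116`–`@128`) fetches the raid bdev with `bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the reported fields against expectations. A rough Python equivalent of that verification, fed a record abridged from the "online" dump earlier in the trace (not the actual test helper, just a sketch of its logic):

```python
import json

# Abridged raid_bdev_info as captured above after both base bdevs joined.
raid_info_json = '''
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
'''

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Check state/level/strip size and recount configured base bdevs,
    the way the shell helper cross-checks the jq-extracted fields."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

info = json.loads(raid_info_json)
print(verify_raid_bdev_state(info, "online", "raid1", 0, 2))  # True
```

After `bdev_malloc_delete BaseBdev1` above, the same check is rerun with one operational base bdev: raid1 has redundancy, so the expected state stays `online` rather than dropping to `offline`.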
00:28:14.608 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:14.867 19:12:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:14.867 [2024-06-10 19:12:29.418188] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:28:14.868 [2024-06-10 19:12:29.418244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796541 ] 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.0 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.1 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.2 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.3 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.4 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.5 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.6 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:01.7 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested device 0000:b6:02.0 cannot be used 00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:14.868 EAL: Requested 
device 0000:b6:02.1 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b6:02.2 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b6:02.3 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b6:02.4 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b6:02.5 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b6:02.6 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b6:02.7 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.0 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.1 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.2 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.3 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.4 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.5 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.6 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:01.7 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.0 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.1 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.2 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.3 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.4 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.5 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.6 cannot be used
00:28:14.868 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:14.868 EAL: Requested device 0000:b8:02.7 cannot be used
00:28:14.868 [2024-06-10 19:12:29.552133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:15.127 [2024-06-10 19:12:29.639170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:28:15.127 [2024-06-10 19:12:29.693493] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:15.127 [2024-06-10 19:12:29.693520] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@863 -- # return 0
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:28:15.696 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1
00:28:15.955 malloc1
00:28:15.955 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:28:16.215 [2024-06-10 19:12:30.745334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:28:16.215 [2024-06-10 19:12:30.745373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:16.215 [2024-06-10 19:12:30.745390] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189a030
00:28:16.215 [2024-06-10 19:12:30.745401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:16.215 [2024-06-10 19:12:30.746747] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:16.215 [2024-06-10 19:12:30.746773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:28:16.215 pt1
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:28:16.215 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2
00:28:16.474 malloc2
00:28:16.474 19:12:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:28:16.474 [2024-06-10 19:12:31.207717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:28:16.474 [2024-06-10 19:12:31.207757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:16.474 [2024-06-10 19:12:31.207774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x188a8a0
00:28:16.474 [2024-06-10 19:12:31.207790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:16.474 [2024-06-10 19:12:31.209007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:16.474 [2024-06-10 19:12:31.209033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:28:16.474 pt2
00:28:16.474 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:28:16.474 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:28:16.474 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
00:28:16.733 [2024-06-10 19:12:31.432318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:28:16.733 [2024-06-10 19:12:31.433445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:28:16.733 [2024-06-10 19:12:31.433570] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x188be60
00:28:16.733 [2024-06-10 19:12:31.433592] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:28:16.733 [2024-06-10 19:12:31.433653] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16fc790
00:28:16.733 [2024-06-10 19:12:31.433754] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x188be60
00:28:16.733 [2024-06-10 19:12:31.433764] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x188be60
00:28:16.734 [2024-06-10 19:12:31.433825] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:16.734 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:16.993 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:28:16.993 "name": "raid_bdev1",
00:28:16.993 "uuid": "82001db3-7feb-46c7-9172-036896c04eed",
00:28:16.993 "strip_size_kb": 0,
00:28:16.993 "state": "online",
00:28:16.993 "raid_level": "raid1",
00:28:16.993 "superblock": true,
00:28:16.993 "num_base_bdevs": 2,
00:28:16.993 "num_base_bdevs_discovered": 2,
00:28:16.993 "num_base_bdevs_operational": 2,
00:28:16.993 "base_bdevs_list": [
00:28:16.993 {
00:28:16.993 "name": "pt1",
00:28:16.993 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:16.993 "is_configured": true,
00:28:16.993 "data_offset": 256,
00:28:16.993 "data_size": 7936
00:28:16.993 },
00:28:16.993 {
00:28:16.993 "name": "pt2",
00:28:16.993 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:16.993 "is_configured": true,
00:28:16.993 "data_offset": 256,
00:28:16.993 "data_size": 7936
00:28:16.993 }
00:28:16.993 ]
00:28:16.993 }'
00:28:16.993 19:12:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:28:16.993 19:12:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:28:17.561 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:28:17.820 [2024-06-10 19:12:32.467226] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:17.820 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:28:17.820 "name": "raid_bdev1",
00:28:17.820 "aliases": [
00:28:17.820 "82001db3-7feb-46c7-9172-036896c04eed"
00:28:17.820 ],
00:28:17.820 "product_name": "Raid Volume",
00:28:17.820 "block_size": 4096,
00:28:17.820 "num_blocks": 7936,
00:28:17.820 "uuid": "82001db3-7feb-46c7-9172-036896c04eed",
00:28:17.820 "md_size": 32,
00:28:17.820 "md_interleave": false,
00:28:17.820 "dif_type": 0,
00:28:17.820 "assigned_rate_limits": {
00:28:17.820 "rw_ios_per_sec": 0,
00:28:17.820 "rw_mbytes_per_sec": 0,
00:28:17.820 "r_mbytes_per_sec": 0,
00:28:17.820 "w_mbytes_per_sec": 0
00:28:17.820 },
00:28:17.820 "claimed": false,
00:28:17.820 "zoned": false,
00:28:17.820 "supported_io_types": {
00:28:17.820 "read": true,
00:28:17.820 "write": true,
00:28:17.820 "unmap": false,
00:28:17.820 "write_zeroes": true,
00:28:17.820 "flush": false,
00:28:17.820 "reset": true,
00:28:17.820 "compare": false,
00:28:17.820 "compare_and_write": false,
00:28:17.820 "abort": false,
00:28:17.820 "nvme_admin": false,
00:28:17.820 "nvme_io": false
00:28:17.820 },
00:28:17.820 "memory_domains": [
00:28:17.820 {
00:28:17.820 "dma_device_id": "system",
00:28:17.820 "dma_device_type": 1
00:28:17.820 },
00:28:17.820 {
00:28:17.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:17.820 "dma_device_type": 2
00:28:17.820 },
00:28:17.820 {
00:28:17.820 "dma_device_id": "system",
00:28:17.820 "dma_device_type": 1
00:28:17.820 },
00:28:17.820 {
00:28:17.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:17.820 "dma_device_type": 2
00:28:17.820 }
00:28:17.820 ],
00:28:17.820 "driver_specific": {
00:28:17.820 "raid": {
00:28:17.820 "uuid": "82001db3-7feb-46c7-9172-036896c04eed",
00:28:17.820 "strip_size_kb": 0,
00:28:17.820 "state": "online",
00:28:17.820 "raid_level": "raid1",
00:28:17.820 "superblock": true,
00:28:17.820 "num_base_bdevs": 2,
00:28:17.820 "num_base_bdevs_discovered": 2,
00:28:17.820 "num_base_bdevs_operational": 2,
00:28:17.820 "base_bdevs_list": [
00:28:17.820 {
00:28:17.820 "name": "pt1",
00:28:17.820 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:17.820 "is_configured": true,
00:28:17.820 "data_offset": 256,
00:28:17.820 "data_size": 7936
00:28:17.820 },
00:28:17.820 {
00:28:17.820 "name": "pt2",
00:28:17.820 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:17.820 "is_configured": true,
00:28:17.820 "data_offset": 256,
00:28:17.820 "data_size": 7936
00:28:17.820 }
00:28:17.820 ]
00:28:17.820 }
00:28:17.820 }
00:28:17.820 }'
00:28:17.820 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:28:17.820 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:28:17.820 pt2'
00:28:17.820 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:28:17.820 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:28:17.820 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:28:18.079 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:28:18.079 "name": "pt1",
00:28:18.079 "aliases": [
00:28:18.079 "00000000-0000-0000-0000-000000000001"
00:28:18.079 ],
00:28:18.079 "product_name": "passthru",
00:28:18.079 "block_size": 4096,
00:28:18.079 "num_blocks": 8192,
00:28:18.079 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:18.079 "md_size": 32,
00:28:18.079 "md_interleave": false,
00:28:18.079 "dif_type": 0,
00:28:18.079 "assigned_rate_limits": {
00:28:18.079 "rw_ios_per_sec": 0,
00:28:18.079 "rw_mbytes_per_sec": 0,
00:28:18.079 "r_mbytes_per_sec": 0,
00:28:18.079 "w_mbytes_per_sec": 0
00:28:18.079 },
00:28:18.079 "claimed": true,
00:28:18.079 "claim_type": "exclusive_write",
00:28:18.079 "zoned": false,
00:28:18.079 "supported_io_types": {
00:28:18.079 "read": true,
00:28:18.079 "write": true,
00:28:18.079 "unmap": true,
00:28:18.079 "write_zeroes": true,
00:28:18.079 "flush": true,
00:28:18.079 "reset": true,
00:28:18.079 "compare": false,
00:28:18.079 "compare_and_write": false,
00:28:18.079 "abort": true,
00:28:18.079 "nvme_admin": false,
00:28:18.079 "nvme_io": false
00:28:18.079 },
00:28:18.079 "memory_domains": [
00:28:18.079 {
00:28:18.079 "dma_device_id": "system",
00:28:18.079 "dma_device_type": 1
00:28:18.079 },
00:28:18.079 {
00:28:18.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:18.079 "dma_device_type": 2
00:28:18.079 }
00:28:18.079 ],
00:28:18.079 "driver_specific": {
00:28:18.079 "passthru": {
00:28:18.079 "name": "pt1",
00:28:18.079 "base_bdev_name": "malloc1"
00:28:18.079 }
00:28:18.079 }
00:28:18.079 }'
00:28:18.079 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:28:18.079 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]]
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]]
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]]
00:28:18.338 19:12:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:28:18.338 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:28:18.338 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]]
00:28:18.338 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:28:18.338 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:28:18.338 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:28:18.597 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:28:18.597 "name": "pt2",
00:28:18.597 "aliases": [
00:28:18.597 "00000000-0000-0000-0000-000000000002"
00:28:18.597 ],
00:28:18.597 "product_name": "passthru",
00:28:18.597 "block_size": 4096,
00:28:18.597 "num_blocks": 8192,
00:28:18.598 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:18.598 "md_size": 32,
00:28:18.598 "md_interleave": false,
00:28:18.598 "dif_type": 0,
00:28:18.598 "assigned_rate_limits": {
00:28:18.598 "rw_ios_per_sec": 0,
00:28:18.598 "rw_mbytes_per_sec": 0,
00:28:18.598 "r_mbytes_per_sec": 0,
00:28:18.598 "w_mbytes_per_sec": 0
00:28:18.598 },
00:28:18.598 "claimed": true,
00:28:18.598 "claim_type": "exclusive_write",
00:28:18.598 "zoned": false,
00:28:18.598 "supported_io_types": {
00:28:18.598 "read": true,
00:28:18.598 "write": true,
00:28:18.598 "unmap": true,
00:28:18.598 "write_zeroes": true,
00:28:18.598 "flush": true,
00:28:18.598 "reset": true,
00:28:18.598 "compare": false,
00:28:18.598 "compare_and_write": false,
00:28:18.598 "abort": true,
00:28:18.598 "nvme_admin": false,
00:28:18.598 "nvme_io": false
00:28:18.598 },
00:28:18.598 "memory_domains": [
00:28:18.598 {
00:28:18.598 "dma_device_id": "system",
00:28:18.598 "dma_device_type": 1
00:28:18.598 },
00:28:18.598 {
00:28:18.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:28:18.598 "dma_device_type": 2
00:28:18.598 }
00:28:18.598 ],
00:28:18.598 "driver_specific": {
00:28:18.598 "passthru": {
00:28:18.598 "name": "pt2",
00:28:18.598 "base_bdev_name": "malloc2"
00:28:18.598 }
00:28:18.598 }
00:28:18.598 }'
00:28:18.598 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]]
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]]
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]]
00:28:18.856 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:28:19.116 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:28:19.116 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]]
00:28:19.116 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:28:19.116 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid'
00:28:19.116 [2024-06-10 19:12:33.858869] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:19.375 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=82001db3-7feb-46c7-9172-036896c04eed
00:28:19.375 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 82001db3-7feb-46c7-9172-036896c04eed ']'
00:28:19.375 19:12:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:28:19.375 [2024-06-10 19:12:34.091284] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:28:19.375 [2024-06-10 19:12:34.091301] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:28:19.375 [2024-06-10 19:12:34.091346] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:28:19.375 [2024-06-10 19:12:34.091390] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:28:19.375 [2024-06-10 19:12:34.091401] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x188be60 name raid_bdev1, state offline
00:28:19.375 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:19.375 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:28:19.634 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:28:19.634 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:28:19.634 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:28:19.634 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:28:19.936 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:28:19.936 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:28:20.229 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:28:20.229 19:12:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # local es=0
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:28:20.488 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:28:20.488 [2024-06-10 19:12:35.230245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:28:20.488 [2024-06-10 19:12:35.231491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:28:20.488 [2024-06-10 19:12:35.231540] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:28:20.488 [2024-06-10 19:12:35.231586] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:28:20.488 [2024-06-10 19:12:35.231604] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:28:20.488 [2024-06-10 19:12:35.231613] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x187fa30 name raid_bdev1, state configuring
00:28:20.488 request:
00:28:20.488 {
00:28:20.488 "name": "raid_bdev1",
00:28:20.488 "raid_level": "raid1",
00:28:20.488 "base_bdevs": [
00:28:20.488 "malloc1",
00:28:20.488 "malloc2"
00:28:20.488 ],
00:28:20.488 "superblock": false,
00:28:20.488 "method": "bdev_raid_create",
00:28:20.488 "req_id": 1
00:28:20.488 }
00:28:20.488 Got JSON-RPC error response
00:28:20.488 response:
00:28:20.488 {
00:28:20.488 "code": -17,
00:28:20.488 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:28:20.488 }
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # es=1
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]'
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev=
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']'
00:28:20.748 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:28:21.009 [2024-06-10 19:12:35.667345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:28:21.009 [2024-06-10 19:12:35.667383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:21.009 [2024-06-10 19:12:35.667398] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x187fe70
00:28:21.009 [2024-06-10 19:12:35.667409] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:21.009 [2024-06-10 19:12:35.668701] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:21.009 [2024-06-10 19:12:35.668737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:28:21.009 [2024-06-10 19:12:35.668778] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:28:21.009 [2024-06-10 19:12:35.668801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:28:21.009 pt1
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:21.009 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:21.268 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:28:21.268 "name": "raid_bdev1",
00:28:21.268 "uuid": "82001db3-7feb-46c7-9172-036896c04eed",
00:28:21.268 "strip_size_kb": 0,
00:28:21.268 "state": "configuring",
00:28:21.268 "raid_level": "raid1",
00:28:21.268 "superblock": true,
00:28:21.268 "num_base_bdevs": 2,
00:28:21.268 "num_base_bdevs_discovered": 1,
00:28:21.268 "num_base_bdevs_operational": 2,
00:28:21.268 "base_bdevs_list": [
00:28:21.268 {
00:28:21.268 "name": "pt1",
00:28:21.268 "uuid": "00000000-0000-0000-0000-000000000001",
00:28:21.268 "is_configured": true,
00:28:21.268 "data_offset": 256,
00:28:21.268 "data_size": 7936
00:28:21.268 },
00:28:21.268 {
00:28:21.268 "name": null,
00:28:21.268 "uuid": "00000000-0000-0000-0000-000000000002",
00:28:21.268 "is_configured": false,
00:28:21.268 "data_offset": 256,
00:28:21.268 "data_size": 7936
00:28:21.268 }
00:28:21.268 ]
00:28:21.268 }'
00:28:21.268 19:12:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:28:21.268 19:12:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:28:21.836 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']'
00:28:21.836 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 ))
00:28:21.836 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs ))
00:28:21.836 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:28:22.095 [2024-06-10 19:12:36.694060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:28:22.095 [2024-06-10 19:12:36.694101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:22.095 [2024-06-10 19:12:36.694117] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16fc960
00:28:22.095 [2024-06-10 19:12:36.694128] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:22.095 [2024-06-10 19:12:36.694288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:22.095 [2024-06-10 19:12:36.694303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:28:22.095 [2024-06-10 19:12:36.694341] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:28:22.095 [2024-06-10 19:12:36.694357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:28:22.095 [2024-06-10 19:12:36.694444] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1880b50
00:28:22.095 [2024-06-10 19:12:36.694455] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:28:22.095 [2024-06-10 19:12:36.694504] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1881f60
00:28:22.095 [2024-06-10 19:12:36.694604] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1880b50
00:28:22.095 [2024-06-10 19:12:36.694614] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1880b50
00:28:22.095 [2024-06-10 19:12:36.694677] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:22.095 pt2
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ ))
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs ))
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:22.095 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:22.355 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:28:22.355 "name": "raid_bdev1",
00:28:22.355 "uuid": "82001db3-7feb-46c7-9172-036896c04eed",
00:28:22.355 "strip_size_kb": 0,
00:28:22.355 "state": "online",
00:28:22.355 "raid_level": "raid1",
00:28:22.355 "superblock": true,
00:28:22.355 "num_base_bdevs": 2,
"num_base_bdevs_discovered": 2, 00:28:22.355 "num_base_bdevs_operational": 2, 00:28:22.355 "base_bdevs_list": [ 00:28:22.355 { 00:28:22.355 "name": "pt1", 00:28:22.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:22.355 "is_configured": true, 00:28:22.355 "data_offset": 256, 00:28:22.355 "data_size": 7936 00:28:22.355 }, 00:28:22.355 { 00:28:22.355 "name": "pt2", 00:28:22.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:22.355 "is_configured": true, 00:28:22.355 "data_offset": 256, 00:28:22.355 "data_size": 7936 00:28:22.355 } 00:28:22.355 ] 00:28:22.355 }' 00:28:22.355 19:12:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.355 19:12:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:22.923 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:23.182 [2024-06-10 19:12:37.692895] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:23.182 19:12:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:23.182 "name": "raid_bdev1", 00:28:23.182 "aliases": [ 00:28:23.182 "82001db3-7feb-46c7-9172-036896c04eed" 00:28:23.182 ], 00:28:23.182 "product_name": "Raid Volume", 00:28:23.182 "block_size": 4096, 00:28:23.182 "num_blocks": 7936, 00:28:23.182 "uuid": "82001db3-7feb-46c7-9172-036896c04eed", 00:28:23.182 "md_size": 32, 00:28:23.182 "md_interleave": false, 00:28:23.182 "dif_type": 0, 00:28:23.182 "assigned_rate_limits": { 00:28:23.182 "rw_ios_per_sec": 0, 00:28:23.182 "rw_mbytes_per_sec": 0, 00:28:23.182 "r_mbytes_per_sec": 0, 00:28:23.182 "w_mbytes_per_sec": 0 00:28:23.182 }, 00:28:23.182 "claimed": false, 00:28:23.182 "zoned": false, 00:28:23.182 "supported_io_types": { 00:28:23.182 "read": true, 00:28:23.182 "write": true, 00:28:23.182 "unmap": false, 00:28:23.182 "write_zeroes": true, 00:28:23.182 "flush": false, 00:28:23.182 "reset": true, 00:28:23.182 "compare": false, 00:28:23.182 "compare_and_write": false, 00:28:23.182 "abort": false, 00:28:23.182 "nvme_admin": false, 00:28:23.182 "nvme_io": false 00:28:23.182 }, 00:28:23.182 "memory_domains": [ 00:28:23.182 { 00:28:23.182 "dma_device_id": "system", 00:28:23.182 "dma_device_type": 1 00:28:23.182 }, 00:28:23.182 { 00:28:23.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.182 "dma_device_type": 2 00:28:23.182 }, 00:28:23.182 { 00:28:23.182 "dma_device_id": "system", 00:28:23.182 "dma_device_type": 1 00:28:23.182 }, 00:28:23.182 { 00:28:23.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.182 "dma_device_type": 2 00:28:23.182 } 00:28:23.182 ], 00:28:23.182 "driver_specific": { 00:28:23.182 "raid": { 00:28:23.182 "uuid": "82001db3-7feb-46c7-9172-036896c04eed", 00:28:23.182 "strip_size_kb": 0, 00:28:23.182 "state": "online", 00:28:23.182 "raid_level": "raid1", 00:28:23.182 "superblock": true, 00:28:23.182 "num_base_bdevs": 2, 00:28:23.182 "num_base_bdevs_discovered": 2, 00:28:23.182 
"num_base_bdevs_operational": 2, 00:28:23.182 "base_bdevs_list": [ 00:28:23.182 { 00:28:23.182 "name": "pt1", 00:28:23.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:23.182 "is_configured": true, 00:28:23.182 "data_offset": 256, 00:28:23.182 "data_size": 7936 00:28:23.182 }, 00:28:23.182 { 00:28:23.182 "name": "pt2", 00:28:23.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:23.182 "is_configured": true, 00:28:23.182 "data_offset": 256, 00:28:23.182 "data_size": 7936 00:28:23.182 } 00:28:23.182 ] 00:28:23.182 } 00:28:23.182 } 00:28:23.182 }' 00:28:23.182 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:23.182 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:23.182 pt2' 00:28:23.182 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:23.182 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:23.182 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:23.442 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:23.442 "name": "pt1", 00:28:23.442 "aliases": [ 00:28:23.442 "00000000-0000-0000-0000-000000000001" 00:28:23.442 ], 00:28:23.442 "product_name": "passthru", 00:28:23.442 "block_size": 4096, 00:28:23.442 "num_blocks": 8192, 00:28:23.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:23.442 "md_size": 32, 00:28:23.442 "md_interleave": false, 00:28:23.442 "dif_type": 0, 00:28:23.442 "assigned_rate_limits": { 00:28:23.442 "rw_ios_per_sec": 0, 00:28:23.442 "rw_mbytes_per_sec": 0, 00:28:23.442 "r_mbytes_per_sec": 0, 00:28:23.442 "w_mbytes_per_sec": 0 
00:28:23.442 }, 00:28:23.442 "claimed": true, 00:28:23.442 "claim_type": "exclusive_write", 00:28:23.442 "zoned": false, 00:28:23.442 "supported_io_types": { 00:28:23.442 "read": true, 00:28:23.442 "write": true, 00:28:23.442 "unmap": true, 00:28:23.442 "write_zeroes": true, 00:28:23.442 "flush": true, 00:28:23.442 "reset": true, 00:28:23.442 "compare": false, 00:28:23.442 "compare_and_write": false, 00:28:23.442 "abort": true, 00:28:23.442 "nvme_admin": false, 00:28:23.442 "nvme_io": false 00:28:23.442 }, 00:28:23.442 "memory_domains": [ 00:28:23.442 { 00:28:23.442 "dma_device_id": "system", 00:28:23.442 "dma_device_type": 1 00:28:23.442 }, 00:28:23.442 { 00:28:23.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.442 "dma_device_type": 2 00:28:23.442 } 00:28:23.442 ], 00:28:23.442 "driver_specific": { 00:28:23.442 "passthru": { 00:28:23.442 "name": "pt1", 00:28:23.442 "base_bdev_name": "malloc1" 00:28:23.442 } 00:28:23.442 } 00:28:23.442 }' 00:28:23.442 19:12:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.442 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.442 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:23.442 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.442 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.442 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:23.442 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:23.701 19:12:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:23.701 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:23.960 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:23.960 "name": "pt2", 00:28:23.960 "aliases": [ 00:28:23.960 "00000000-0000-0000-0000-000000000002" 00:28:23.960 ], 00:28:23.960 "product_name": "passthru", 00:28:23.960 "block_size": 4096, 00:28:23.960 "num_blocks": 8192, 00:28:23.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:23.960 "md_size": 32, 00:28:23.960 "md_interleave": false, 00:28:23.960 "dif_type": 0, 00:28:23.960 "assigned_rate_limits": { 00:28:23.960 "rw_ios_per_sec": 0, 00:28:23.960 "rw_mbytes_per_sec": 0, 00:28:23.960 "r_mbytes_per_sec": 0, 00:28:23.960 "w_mbytes_per_sec": 0 00:28:23.960 }, 00:28:23.960 "claimed": true, 00:28:23.960 "claim_type": "exclusive_write", 00:28:23.960 "zoned": false, 00:28:23.960 "supported_io_types": { 00:28:23.960 "read": true, 00:28:23.960 "write": true, 00:28:23.960 "unmap": true, 00:28:23.960 "write_zeroes": true, 00:28:23.960 "flush": true, 00:28:23.960 "reset": true, 00:28:23.960 "compare": false, 00:28:23.960 "compare_and_write": false, 00:28:23.960 "abort": true, 00:28:23.960 "nvme_admin": false, 00:28:23.960 "nvme_io": false 00:28:23.960 }, 00:28:23.960 "memory_domains": [ 00:28:23.960 { 00:28:23.960 
"dma_device_id": "system", 00:28:23.960 "dma_device_type": 1 00:28:23.960 }, 00:28:23.960 { 00:28:23.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.960 "dma_device_type": 2 00:28:23.960 } 00:28:23.960 ], 00:28:23.960 "driver_specific": { 00:28:23.960 "passthru": { 00:28:23.960 "name": "pt2", 00:28:23.960 "base_bdev_name": "malloc2" 00:28:23.960 } 00:28:23.960 } 00:28:23.960 }' 00:28:23.960 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.960 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:23.961 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:23.961 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:23.961 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:24.220 19:12:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:28:24.479 [2024-06-10 19:12:39.116635] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:24.479 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 82001db3-7feb-46c7-9172-036896c04eed '!=' 82001db3-7feb-46c7-9172-036896c04eed ']' 00:28:24.479 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:28:24.479 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:24.479 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:28:24.479 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:24.738 [2024-06-10 19:12:39.349067] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.738 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.739 19:12:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.739 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.739 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.739 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.998 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.998 "name": "raid_bdev1", 00:28:24.998 "uuid": "82001db3-7feb-46c7-9172-036896c04eed", 00:28:24.998 "strip_size_kb": 0, 00:28:24.998 "state": "online", 00:28:24.998 "raid_level": "raid1", 00:28:24.998 "superblock": true, 00:28:24.998 "num_base_bdevs": 2, 00:28:24.998 "num_base_bdevs_discovered": 1, 00:28:24.998 "num_base_bdevs_operational": 1, 00:28:24.998 "base_bdevs_list": [ 00:28:24.998 { 00:28:24.998 "name": null, 00:28:24.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.998 "is_configured": false, 00:28:24.998 "data_offset": 256, 00:28:24.998 "data_size": 7936 00:28:24.998 }, 00:28:24.998 { 00:28:24.998 "name": "pt2", 00:28:24.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:24.998 "is_configured": true, 00:28:24.998 "data_offset": 256, 00:28:24.998 "data_size": 7936 00:28:24.998 } 00:28:24.998 ] 00:28:24.998 }' 00:28:24.998 19:12:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.998 19:12:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:25.566 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:25.826 [2024-06-10 
19:12:40.391794] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:25.826 [2024-06-10 19:12:40.391817] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:25.826 [2024-06-10 19:12:40.391862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:25.826 [2024-06-10 19:12:40.391899] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:25.826 [2024-06-10 19:12:40.391910] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1880b50 name raid_bdev1, state offline 00:28:25.826 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.826 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:28:26.085 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:28:26.085 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:28:26.085 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:28:26.085 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:26.085 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:26.345 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:28:26.345 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:28:26.345 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:28:26.345 19:12:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:28:26.345 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:28:26.345 19:12:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:26.345 [2024-06-10 19:12:41.069537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:26.345 [2024-06-10 19:12:41.069582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.345 [2024-06-10 19:12:41.069599] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189a6e0 00:28:26.345 [2024-06-10 19:12:41.069611] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.345 [2024-06-10 19:12:41.070954] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.345 [2024-06-10 19:12:41.070981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:26.345 [2024-06-10 19:12:41.071022] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:26.345 [2024-06-10 19:12:41.071045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:26.345 [2024-06-10 19:12:41.071114] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1881210 00:28:26.345 [2024-06-10 19:12:41.071123] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:26.345 [2024-06-10 19:12:41.071177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16fce80 00:28:26.345 [2024-06-10 19:12:41.071264] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1881210 00:28:26.345 [2024-06-10 19:12:41.071273] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1881210 00:28:26.345 [2024-06-10 19:12:41.071334] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.345 pt2 00:28:26.345 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:26.345 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:26.345 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.345 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:26.345 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:26.345 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:26.346 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.346 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.346 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.346 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.346 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.346 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.605 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.605 "name": "raid_bdev1", 00:28:26.605 "uuid": 
"82001db3-7feb-46c7-9172-036896c04eed", 00:28:26.605 "strip_size_kb": 0, 00:28:26.605 "state": "online", 00:28:26.605 "raid_level": "raid1", 00:28:26.605 "superblock": true, 00:28:26.605 "num_base_bdevs": 2, 00:28:26.605 "num_base_bdevs_discovered": 1, 00:28:26.605 "num_base_bdevs_operational": 1, 00:28:26.605 "base_bdevs_list": [ 00:28:26.605 { 00:28:26.605 "name": null, 00:28:26.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.605 "is_configured": false, 00:28:26.605 "data_offset": 256, 00:28:26.605 "data_size": 7936 00:28:26.605 }, 00:28:26.605 { 00:28:26.605 "name": "pt2", 00:28:26.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.605 "is_configured": true, 00:28:26.605 "data_offset": 256, 00:28:26.605 "data_size": 7936 00:28:26.605 } 00:28:26.605 ] 00:28:26.605 }' 00:28:26.605 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.605 19:12:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:27.185 19:12:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:27.445 [2024-06-10 19:12:42.072170] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:27.445 [2024-06-10 19:12:42.072192] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:27.445 [2024-06-10 19:12:42.072236] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:27.445 [2024-06-10 19:12:42.072273] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:27.445 [2024-06-10 19:12:42.072283] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1881210 name raid_bdev1, state offline 00:28:27.445 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.445 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:28:27.705 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:28:27.705 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:28:27.705 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:28:27.705 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:27.964 [2024-06-10 19:12:42.529361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:27.964 [2024-06-10 19:12:42.529403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.964 [2024-06-10 19:12:42.529419] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16fe0b0 00:28:27.964 [2024-06-10 19:12:42.529431] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.964 [2024-06-10 19:12:42.530775] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.964 [2024-06-10 19:12:42.530802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:27.964 [2024-06-10 19:12:42.530842] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:27.964 [2024-06-10 19:12:42.530863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:27.964 [2024-06-10 19:12:42.530943] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:27.964 [2024-06-10 19:12:42.530955] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:27.964 [2024-06-10 19:12:42.530967] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18824b0 name raid_bdev1, state configuring 00:28:27.964 [2024-06-10 19:12:42.530994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:27.964 [2024-06-10 19:12:42.531040] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18824b0 00:28:27.964 [2024-06-10 19:12:42.531049] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:27.964 [2024-06-10 19:12:42.531098] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1881af0 00:28:27.964 [2024-06-10 19:12:42.531183] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18824b0 00:28:27.964 [2024-06-10 19:12:42.531192] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18824b0 00:28:27.964 [2024-06-10 19:12:42.531257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:27.964 pt1 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:27.964 
19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.964 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.224 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.224 "name": "raid_bdev1", 00:28:28.224 "uuid": "82001db3-7feb-46c7-9172-036896c04eed", 00:28:28.224 "strip_size_kb": 0, 00:28:28.224 "state": "online", 00:28:28.224 "raid_level": "raid1", 00:28:28.224 "superblock": true, 00:28:28.224 "num_base_bdevs": 2, 00:28:28.224 "num_base_bdevs_discovered": 1, 00:28:28.224 "num_base_bdevs_operational": 1, 00:28:28.224 "base_bdevs_list": [ 00:28:28.224 { 00:28:28.224 "name": null, 00:28:28.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.224 "is_configured": false, 00:28:28.224 "data_offset": 256, 00:28:28.224 "data_size": 7936 00:28:28.224 }, 00:28:28.224 { 00:28:28.224 "name": "pt2", 00:28:28.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:28.224 "is_configured": true, 00:28:28.224 "data_offset": 256, 00:28:28.224 "data_size": 7936 00:28:28.224 } 00:28:28.224 ] 00:28:28.224 }' 00:28:28.224 19:12:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.224 19:12:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:28:28.793 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:28.793 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:29.052 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:28:29.052 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:29.052 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:28:29.312 [2024-06-10 19:12:43.812917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 82001db3-7feb-46c7-9172-036896c04eed '!=' 82001db3-7feb-46c7-9172-036896c04eed ']' 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 1796541 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@949 -- # '[' -z 1796541 ']' 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # kill -0 1796541 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # uname 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1796541 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:29.312 19:12:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1796541' 00:28:29.312 killing process with pid 1796541 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # kill 1796541 00:28:29.312 [2024-06-10 19:12:43.891372] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:29.312 [2024-06-10 19:12:43.891416] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:29.312 [2024-06-10 19:12:43.891451] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:29.312 [2024-06-10 19:12:43.891462] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18824b0 name raid_bdev1, state offline 00:28:29.312 19:12:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # wait 1796541 00:28:29.312 [2024-06-10 19:12:43.912721] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:29.572 19:12:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:28:29.572 00:28:29.572 real 0m14.746s 00:28:29.572 user 0m26.654s 00:28:29.572 sys 0m2.815s 00:28:29.572 19:12:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:29.572 19:12:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:29.572 ************************************ 00:28:29.572 END TEST raid_superblock_test_md_separate 00:28:29.572 ************************************ 00:28:29.572 19:12:44 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:28:29.572 19:12:44 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:28:29.572 19:12:44 bdev_raid -- 
common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:28:29.572 19:12:44 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:29.572 19:12:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:29.572 ************************************ 00:28:29.572 START TEST raid_rebuild_test_sb_md_separate 00:28:29.572 ************************************ 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false true 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= 
num_base_bdevs )) 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=1799292 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 1799292 /var/tmp/spdk-raid.sock 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@830 -- # '[' -z 1799292 ']' 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:29.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:29.572 19:12:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:29.572 [2024-06-10 19:12:44.256146] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:28:29.572 [2024-06-10 19:12:44.256201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799292 ] 00:28:29.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:29.572 Zero copy mechanism will not be used. 
00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.0 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.1 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.2 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.3 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.4 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.5 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.572 EAL: Requested device 0000:b6:01.6 cannot be used 00:28:29.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:01.7 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.0 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.1 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.2 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.3 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.4 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.5 cannot be used 00:28:29.832 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.6 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b6:02.7 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.0 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.1 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.2 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.3 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.4 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.5 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.6 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:01.7 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.0 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.1 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.2 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.3 cannot be used 00:28:29.832 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.4 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.5 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.6 cannot be used 00:28:29.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:29.832 EAL: Requested device 0000:b8:02.7 cannot be used 00:28:29.832 [2024-06-10 19:12:44.387843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.832 [2024-06-10 19:12:44.474826] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.832 [2024-06-10 19:12:44.536098] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:29.832 [2024-06-10 19:12:44.536143] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.400 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:30.400 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@863 -- # return 0 00:28:30.400 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:30.400 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:28:30.659 BaseBdev1_malloc 00:28:30.659 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:30.919 [2024-06-10 19:12:45.592999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:30.919 
[2024-06-10 19:12:45.593041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.919 [2024-06-10 19:12:45.593061] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19e5cb0 00:28:30.919 [2024-06-10 19:12:45.593072] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.919 [2024-06-10 19:12:45.594388] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.919 [2024-06-10 19:12:45.594415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:30.919 BaseBdev1 00:28:30.919 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:30.919 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:28:31.178 BaseBdev2_malloc 00:28:31.178 19:12:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:31.443 [2024-06-10 19:12:46.051250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:31.443 [2024-06-10 19:12:46.051291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.443 [2024-06-10 19:12:46.051309] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b3dc00 00:28:31.443 [2024-06-10 19:12:46.051320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:31.443 [2024-06-10 19:12:46.052526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.443 [2024-06-10 19:12:46.052551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:31.443 
BaseBdev2 00:28:31.443 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:28:31.702 spare_malloc 00:28:31.702 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:31.961 spare_delay 00:28:31.961 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:31.961 [2024-06-10 19:12:46.710044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:31.961 [2024-06-10 19:12:46.710081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.961 [2024-06-10 19:12:46.710101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b1d380 00:28:31.961 [2024-06-10 19:12:46.710113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:31.961 [2024-06-10 19:12:46.711323] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.961 [2024-06-10 19:12:46.711351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:31.961 spare 00:28:32.220 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:32.221 [2024-06-10 19:12:46.922625] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:32.221 [2024-06-10 19:12:46.923726] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:32.221 [2024-06-10 19:12:46.923875] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b1eee0 00:28:32.221 [2024-06-10 19:12:46.923887] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:32.221 [2024-06-10 19:12:46.923946] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b20ca0 00:28:32.221 [2024-06-10 19:12:46.924050] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b1eee0 00:28:32.221 [2024-06-10 19:12:46.924059] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b1eee0 00:28:32.221 [2024-06-10 19:12:46.924121] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:32.221 19:12:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.221 19:12:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.480 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:32.480 "name": "raid_bdev1", 00:28:32.480 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:32.480 "strip_size_kb": 0, 00:28:32.480 "state": "online", 00:28:32.480 "raid_level": "raid1", 00:28:32.480 "superblock": true, 00:28:32.480 "num_base_bdevs": 2, 00:28:32.480 "num_base_bdevs_discovered": 2, 00:28:32.480 "num_base_bdevs_operational": 2, 00:28:32.480 "base_bdevs_list": [ 00:28:32.480 { 00:28:32.480 "name": "BaseBdev1", 00:28:32.480 "uuid": "9de43666-1b3c-5283-88c6-92a7b280db3c", 00:28:32.480 "is_configured": true, 00:28:32.480 "data_offset": 256, 00:28:32.480 "data_size": 7936 00:28:32.480 }, 00:28:32.480 { 00:28:32.480 "name": "BaseBdev2", 00:28:32.480 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:32.480 "is_configured": true, 00:28:32.480 "data_offset": 256, 00:28:32.480 "data_size": 7936 00:28:32.480 } 00:28:32.480 ] 00:28:32.480 }' 00:28:32.480 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:32.480 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:33.048 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:33.048 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:33.307 [2024-06-10 
19:12:47.933469] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:33.307 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:28:33.307 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.307 19:12:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:33.566 19:12:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:33.566 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:33.825 [2024-06-10 19:12:48.390511] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b20ca0 00:28:33.826 /dev/nbd0 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local i 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # break 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.826 1+0 records in 00:28:33.826 1+0 records out 00:28:33.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253301 s, 16.2 MB/s 00:28:33.826 19:12:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # size=4096 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # return 0 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:33.826 19:12:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:28:34.761 7936+0 records in 00:28:34.761 7936+0 records out 00:28:34.761 32505856 bytes (33 MB, 31 MiB) copied, 0.792613 s, 41.0 MB/s 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@51 -- # local i 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:34.761 [2024-06-10 19:12:49.489430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:34.761 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:35.020 [2024-06-10 19:12:49.706037] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.020 19:12:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.020 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.278 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.278 "name": "raid_bdev1", 00:28:35.278 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:35.278 "strip_size_kb": 0, 00:28:35.278 "state": "online", 00:28:35.278 "raid_level": "raid1", 00:28:35.278 "superblock": true, 00:28:35.278 "num_base_bdevs": 2, 00:28:35.278 "num_base_bdevs_discovered": 1, 00:28:35.278 "num_base_bdevs_operational": 1, 00:28:35.278 "base_bdevs_list": [ 00:28:35.278 { 00:28:35.278 "name": null, 00:28:35.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.278 "is_configured": false, 00:28:35.278 "data_offset": 256, 00:28:35.278 "data_size": 7936 00:28:35.278 }, 
00:28:35.278 { 00:28:35.278 "name": "BaseBdev2", 00:28:35.278 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:35.278 "is_configured": true, 00:28:35.278 "data_offset": 256, 00:28:35.278 "data_size": 7936 00:28:35.278 } 00:28:35.278 ] 00:28:35.278 }' 00:28:35.279 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.279 19:12:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:35.846 19:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:36.105 [2024-06-10 19:12:50.744777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:36.105 [2024-06-10 19:12:50.746970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19e3f20 00:28:36.105 [2024-06-10 19:12:50.749088] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.105 19:12:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:37.041 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.041 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.041 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:37.042 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:37.042 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:37.042 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:37.042 19:12:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.300 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.300 "name": "raid_bdev1", 00:28:37.300 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:37.300 "strip_size_kb": 0, 00:28:37.300 "state": "online", 00:28:37.300 "raid_level": "raid1", 00:28:37.300 "superblock": true, 00:28:37.300 "num_base_bdevs": 2, 00:28:37.300 "num_base_bdevs_discovered": 2, 00:28:37.300 "num_base_bdevs_operational": 2, 00:28:37.300 "process": { 00:28:37.300 "type": "rebuild", 00:28:37.300 "target": "spare", 00:28:37.300 "progress": { 00:28:37.300 "blocks": 3072, 00:28:37.300 "percent": 38 00:28:37.300 } 00:28:37.300 }, 00:28:37.300 "base_bdevs_list": [ 00:28:37.300 { 00:28:37.300 "name": "spare", 00:28:37.300 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:37.300 "is_configured": true, 00:28:37.300 "data_offset": 256, 00:28:37.300 "data_size": 7936 00:28:37.300 }, 00:28:37.300 { 00:28:37.300 "name": "BaseBdev2", 00:28:37.300 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:37.300 "is_configured": true, 00:28:37.300 "data_offset": 256, 00:28:37.300 "data_size": 7936 00:28:37.300 } 00:28:37.300 ] 00:28:37.300 }' 00:28:37.300 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.300 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.300 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:37.559 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:37.559 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:37.559 [2024-06-10 19:12:52.305810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:37.818 [2024-06-10 19:12:52.360952] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:37.818 [2024-06-10 19:12:52.360994] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.818 [2024-06-10 19:12:52.361008] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:37.818 [2024-06-10 19:12:52.361015] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.818 19:12:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.818 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.076 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.076 "name": "raid_bdev1", 00:28:38.076 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:38.076 "strip_size_kb": 0, 00:28:38.076 "state": "online", 00:28:38.076 "raid_level": "raid1", 00:28:38.076 "superblock": true, 00:28:38.076 "num_base_bdevs": 2, 00:28:38.076 "num_base_bdevs_discovered": 1, 00:28:38.076 "num_base_bdevs_operational": 1, 00:28:38.076 "base_bdevs_list": [ 00:28:38.076 { 00:28:38.076 "name": null, 00:28:38.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.076 "is_configured": false, 00:28:38.076 "data_offset": 256, 00:28:38.076 "data_size": 7936 00:28:38.076 }, 00:28:38.076 { 00:28:38.076 "name": "BaseBdev2", 00:28:38.076 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:38.076 "is_configured": true, 00:28:38.076 "data_offset": 256, 00:28:38.076 "data_size": 7936 00:28:38.076 } 00:28:38.076 ] 00:28:38.076 }' 00:28:38.076 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.076 19:12:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 
-- # local target=none 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.645 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.905 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.905 "name": "raid_bdev1", 00:28:38.905 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:38.905 "strip_size_kb": 0, 00:28:38.905 "state": "online", 00:28:38.905 "raid_level": "raid1", 00:28:38.905 "superblock": true, 00:28:38.905 "num_base_bdevs": 2, 00:28:38.905 "num_base_bdevs_discovered": 1, 00:28:38.905 "num_base_bdevs_operational": 1, 00:28:38.905 "base_bdevs_list": [ 00:28:38.905 { 00:28:38.905 "name": null, 00:28:38.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.905 "is_configured": false, 00:28:38.905 "data_offset": 256, 00:28:38.905 "data_size": 7936 00:28:38.905 }, 00:28:38.905 { 00:28:38.905 "name": "BaseBdev2", 00:28:38.905 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:38.905 "is_configured": true, 00:28:38.905 "data_offset": 256, 00:28:38.905 "data_size": 7936 00:28:38.905 } 00:28:38.905 ] 00:28:38.905 }' 00:28:38.905 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.905 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:38.905 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.905 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:38.905 19:12:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:39.164 [2024-06-10 19:12:53.703412] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:39.164 [2024-06-10 19:12:53.705592] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19e4990 00:28:39.164 [2024-06-10 19:12:53.707024] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:39.164 19:12:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:40.099 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.099 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.099 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:40.099 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:40.099 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.100 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.100 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.360 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.360 "name": "raid_bdev1", 00:28:40.360 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:40.360 "strip_size_kb": 0, 00:28:40.360 "state": "online", 00:28:40.360 "raid_level": "raid1", 00:28:40.360 "superblock": 
true, 00:28:40.360 "num_base_bdevs": 2, 00:28:40.360 "num_base_bdevs_discovered": 2, 00:28:40.360 "num_base_bdevs_operational": 2, 00:28:40.360 "process": { 00:28:40.360 "type": "rebuild", 00:28:40.360 "target": "spare", 00:28:40.360 "progress": { 00:28:40.360 "blocks": 3072, 00:28:40.360 "percent": 38 00:28:40.360 } 00:28:40.360 }, 00:28:40.360 "base_bdevs_list": [ 00:28:40.360 { 00:28:40.360 "name": "spare", 00:28:40.360 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:40.360 "is_configured": true, 00:28:40.360 "data_offset": 256, 00:28:40.360 "data_size": 7936 00:28:40.360 }, 00:28:40.360 { 00:28:40.360 "name": "BaseBdev2", 00:28:40.360 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:40.360 "is_configured": true, 00:28:40.360 "data_offset": 256, 00:28:40.360 "data_size": 7936 00:28:40.360 } 00:28:40.360 ] 00:28:40.360 }' 00:28:40.360 19:12:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:40.361 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:40.361 19:12:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1000 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.361 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.696 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.696 "name": "raid_bdev1", 00:28:40.696 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:40.696 "strip_size_kb": 0, 00:28:40.696 "state": "online", 00:28:40.696 "raid_level": "raid1", 00:28:40.696 "superblock": true, 00:28:40.696 "num_base_bdevs": 2, 00:28:40.696 "num_base_bdevs_discovered": 2, 00:28:40.696 "num_base_bdevs_operational": 2, 00:28:40.696 "process": { 00:28:40.696 "type": "rebuild", 00:28:40.696 "target": "spare", 00:28:40.696 "progress": { 00:28:40.696 "blocks": 3840, 00:28:40.696 "percent": 48 00:28:40.696 } 00:28:40.696 }, 00:28:40.696 "base_bdevs_list": [ 
00:28:40.696 { 00:28:40.696 "name": "spare", 00:28:40.696 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:40.696 "is_configured": true, 00:28:40.696 "data_offset": 256, 00:28:40.696 "data_size": 7936 00:28:40.696 }, 00:28:40.696 { 00:28:40.696 "name": "BaseBdev2", 00:28:40.696 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:40.696 "is_configured": true, 00:28:40.696 "data_offset": 256, 00:28:40.696 "data_size": 7936 00:28:40.696 } 00:28:40.696 ] 00:28:40.696 }' 00:28:40.696 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.696 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.696 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.696 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.697 19:12:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:41.632 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:41.632 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.632 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.632 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.632 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.632 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.892 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.892 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.892 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.892 "name": "raid_bdev1", 00:28:41.892 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:41.892 "strip_size_kb": 0, 00:28:41.892 "state": "online", 00:28:41.892 "raid_level": "raid1", 00:28:41.892 "superblock": true, 00:28:41.892 "num_base_bdevs": 2, 00:28:41.892 "num_base_bdevs_discovered": 2, 00:28:41.892 "num_base_bdevs_operational": 2, 00:28:41.892 "process": { 00:28:41.892 "type": "rebuild", 00:28:41.892 "target": "spare", 00:28:41.892 "progress": { 00:28:41.892 "blocks": 7168, 00:28:41.892 "percent": 90 00:28:41.892 } 00:28:41.892 }, 00:28:41.892 "base_bdevs_list": [ 00:28:41.892 { 00:28:41.892 "name": "spare", 00:28:41.892 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:41.892 "is_configured": true, 00:28:41.892 "data_offset": 256, 00:28:41.892 "data_size": 7936 00:28:41.892 }, 00:28:41.892 { 00:28:41.892 "name": "BaseBdev2", 00:28:41.892 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:41.892 "is_configured": true, 00:28:41.892 "data_offset": 256, 00:28:41.892 "data_size": 7936 00:28:41.892 } 00:28:41.892 ] 00:28:41.892 }' 00:28:41.892 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.150 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.150 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.150 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.150 19:12:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:42.150 
[2024-06-10 19:12:56.829667] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:42.150 [2024-06-10 19:12:56.829717] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:42.150 [2024-06-10 19:12:56.829791] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.086 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.345 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.345 "name": "raid_bdev1", 00:28:43.345 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:43.345 "strip_size_kb": 0, 00:28:43.345 "state": "online", 00:28:43.345 "raid_level": "raid1", 00:28:43.345 "superblock": true, 00:28:43.345 "num_base_bdevs": 2, 00:28:43.345 "num_base_bdevs_discovered": 2, 00:28:43.345 "num_base_bdevs_operational": 2, 00:28:43.345 "base_bdevs_list": [ 00:28:43.345 { 00:28:43.345 "name": 
"spare", 00:28:43.345 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:43.345 "is_configured": true, 00:28:43.345 "data_offset": 256, 00:28:43.345 "data_size": 7936 00:28:43.345 }, 00:28:43.345 { 00:28:43.345 "name": "BaseBdev2", 00:28:43.345 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:43.345 "is_configured": true, 00:28:43.345 "data_offset": 256, 00:28:43.345 "data_size": 7936 00:28:43.345 } 00:28:43.345 ] 00:28:43.345 }' 00:28:43.345 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.345 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:43.345 19:12:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.345 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.604 "name": "raid_bdev1", 00:28:43.604 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:43.604 "strip_size_kb": 0, 00:28:43.604 "state": "online", 00:28:43.604 "raid_level": "raid1", 00:28:43.604 "superblock": true, 00:28:43.604 "num_base_bdevs": 2, 00:28:43.604 "num_base_bdevs_discovered": 2, 00:28:43.604 "num_base_bdevs_operational": 2, 00:28:43.604 "base_bdevs_list": [ 00:28:43.604 { 00:28:43.604 "name": "spare", 00:28:43.604 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:43.604 "is_configured": true, 00:28:43.604 "data_offset": 256, 00:28:43.604 "data_size": 7936 00:28:43.604 }, 00:28:43.604 { 00:28:43.604 "name": "BaseBdev2", 00:28:43.604 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:43.604 "is_configured": true, 00:28:43.604 "data_offset": 256, 00:28:43.604 "data_size": 7936 00:28:43.604 } 00:28:43.604 ] 00:28:43.604 }' 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.604 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.863 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.863 "name": "raid_bdev1", 00:28:43.863 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:43.863 "strip_size_kb": 0, 00:28:43.863 "state": "online", 00:28:43.863 "raid_level": "raid1", 00:28:43.863 "superblock": true, 00:28:43.863 "num_base_bdevs": 2, 00:28:43.863 "num_base_bdevs_discovered": 2, 00:28:43.863 "num_base_bdevs_operational": 2, 00:28:43.863 "base_bdevs_list": [ 00:28:43.863 { 00:28:43.863 "name": "spare", 00:28:43.863 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:43.863 "is_configured": true, 00:28:43.863 "data_offset": 256, 00:28:43.863 "data_size": 7936 00:28:43.863 }, 00:28:43.863 { 00:28:43.863 "name": "BaseBdev2", 00:28:43.863 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:43.863 "is_configured": true, 00:28:43.863 "data_offset": 256, 00:28:43.863 
"data_size": 7936 00:28:43.863 } 00:28:43.863 ] 00:28:43.863 }' 00:28:43.863 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.863 19:12:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:44.430 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:44.688 [2024-06-10 19:12:59.367244] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:44.688 [2024-06-10 19:12:59.367268] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:44.688 [2024-06-10 19:12:59.367318] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:44.688 [2024-06-10 19:12:59.367366] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:44.688 [2024-06-10 19:12:59.367377] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b1eee0 name raid_bdev1, state offline 00:28:44.689 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.689 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:44.947 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:45.206 /dev/nbd0 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local i 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 
00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # break 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:45.206 1+0 records in 00:28:45.206 1+0 records out 00:28:45.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268728 s, 15.2 MB/s 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # size=4096 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # return 0 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:45.206 19:12:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:45.464 /dev/nbd1 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:45.464 19:13:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local i 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # break 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:45.464 1+0 records in 00:28:45.464 1+0 records out 00:28:45.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316084 s, 13.0 MB/s 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # size=4096 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:28:45.464 19:13:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # return 0 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:45.464 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:45.723 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:45.723 19:13:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:45.982 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:46.240 19:13:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:46.499 [2024-06-10 19:13:01.158599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:46.499 [2024-06-10 19:13:01.158642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:46.499 [2024-06-10 19:13:01.158661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b80030 00:28:46.499 [2024-06-10 19:13:01.158672] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:46.499 [2024-06-10 19:13:01.160048] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:46.499 [2024-06-10 19:13:01.160076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:46.499 [2024-06-10 19:13:01.160134] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:46.499 [2024-06-10 19:13:01.160159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:46.499 [2024-06-10 19:13:01.160243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:46.499 spare 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.499 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.758 [2024-06-10 19:13:01.260544] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b1fd70 00:28:46.758 [2024-06-10 19:13:01.260559] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:46.758 [2024-06-10 19:13:01.260635] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b229d0 00:28:46.758 [2024-06-10 19:13:01.260752] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b1fd70 00:28:46.758 [2024-06-10 19:13:01.260773] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b1fd70 00:28:46.758 [2024-06-10 19:13:01.260846] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:46.758 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:46.758 "name": "raid_bdev1", 00:28:46.758 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:46.758 "strip_size_kb": 0, 00:28:46.758 "state": "online", 00:28:46.758 "raid_level": "raid1", 00:28:46.758 "superblock": true, 00:28:46.758 "num_base_bdevs": 2, 00:28:46.758 
"num_base_bdevs_discovered": 2, 00:28:46.758 "num_base_bdevs_operational": 2, 00:28:46.758 "base_bdevs_list": [ 00:28:46.758 { 00:28:46.758 "name": "spare", 00:28:46.758 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:46.758 "is_configured": true, 00:28:46.758 "data_offset": 256, 00:28:46.758 "data_size": 7936 00:28:46.758 }, 00:28:46.758 { 00:28:46.758 "name": "BaseBdev2", 00:28:46.758 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:46.758 "is_configured": true, 00:28:46.758 "data_offset": 256, 00:28:46.758 "data_size": 7936 00:28:46.758 } 00:28:46.758 ] 00:28:46.758 }' 00:28:46.758 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:46.758 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.325 19:13:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.582 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:47.582 "name": "raid_bdev1", 00:28:47.582 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:47.582 
"strip_size_kb": 0, 00:28:47.582 "state": "online", 00:28:47.582 "raid_level": "raid1", 00:28:47.582 "superblock": true, 00:28:47.582 "num_base_bdevs": 2, 00:28:47.582 "num_base_bdevs_discovered": 2, 00:28:47.582 "num_base_bdevs_operational": 2, 00:28:47.582 "base_bdevs_list": [ 00:28:47.582 { 00:28:47.582 "name": "spare", 00:28:47.582 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:47.582 "is_configured": true, 00:28:47.582 "data_offset": 256, 00:28:47.582 "data_size": 7936 00:28:47.582 }, 00:28:47.582 { 00:28:47.582 "name": "BaseBdev2", 00:28:47.582 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:47.582 "is_configured": true, 00:28:47.582 "data_offset": 256, 00:28:47.582 "data_size": 7936 00:28:47.582 } 00:28:47.582 ] 00:28:47.582 }' 00:28:47.582 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:47.582 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:47.582 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:47.583 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:47.583 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.583 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:47.841 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:47.841 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:48.101 [2024-06-10 19:13:02.734854] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.101 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.360 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.360 "name": "raid_bdev1", 00:28:48.360 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:48.360 "strip_size_kb": 0, 00:28:48.360 "state": "online", 00:28:48.360 "raid_level": "raid1", 00:28:48.360 "superblock": true, 00:28:48.360 
"num_base_bdevs": 2, 00:28:48.360 "num_base_bdevs_discovered": 1, 00:28:48.360 "num_base_bdevs_operational": 1, 00:28:48.360 "base_bdevs_list": [ 00:28:48.360 { 00:28:48.360 "name": null, 00:28:48.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.360 "is_configured": false, 00:28:48.360 "data_offset": 256, 00:28:48.360 "data_size": 7936 00:28:48.360 }, 00:28:48.360 { 00:28:48.360 "name": "BaseBdev2", 00:28:48.360 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:48.360 "is_configured": true, 00:28:48.360 "data_offset": 256, 00:28:48.360 "data_size": 7936 00:28:48.360 } 00:28:48.360 ] 00:28:48.360 }' 00:28:48.360 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.360 19:13:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:48.927 19:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:49.186 [2024-06-10 19:13:03.769613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.186 [2024-06-10 19:13:03.769741] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:49.186 [2024-06-10 19:13:03.769756] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:49.186 [2024-06-10 19:13:03.769782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.186 [2024-06-10 19:13:03.771875] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19e4990 00:28:49.186 [2024-06-10 19:13:03.773110] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:49.186 19:13:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.122 19:13:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.381 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:50.381 "name": "raid_bdev1", 00:28:50.381 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:50.381 "strip_size_kb": 0, 00:28:50.381 "state": "online", 00:28:50.381 "raid_level": "raid1", 00:28:50.381 "superblock": true, 00:28:50.381 "num_base_bdevs": 2, 00:28:50.381 "num_base_bdevs_discovered": 2, 00:28:50.381 "num_base_bdevs_operational": 2, 00:28:50.381 "process": { 00:28:50.381 "type": "rebuild", 00:28:50.381 
"target": "spare", 00:28:50.381 "progress": { 00:28:50.381 "blocks": 3072, 00:28:50.381 "percent": 38 00:28:50.381 } 00:28:50.381 }, 00:28:50.381 "base_bdevs_list": [ 00:28:50.381 { 00:28:50.381 "name": "spare", 00:28:50.381 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:50.381 "is_configured": true, 00:28:50.381 "data_offset": 256, 00:28:50.381 "data_size": 7936 00:28:50.381 }, 00:28:50.381 { 00:28:50.381 "name": "BaseBdev2", 00:28:50.381 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:50.381 "is_configured": true, 00:28:50.381 "data_offset": 256, 00:28:50.381 "data_size": 7936 00:28:50.381 } 00:28:50.381 ] 00:28:50.381 }' 00:28:50.381 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:50.381 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.381 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:50.381 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.381 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:50.640 [2024-06-10 19:13:05.318288] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.640 [2024-06-10 19:13:05.385019] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:50.640 [2024-06-10 19:13:05.385059] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.640 [2024-06-10 19:13:05.385073] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.640 [2024-06-10 19:13:05.385080] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.899 "name": "raid_bdev1", 00:28:50.899 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:50.899 "strip_size_kb": 0, 00:28:50.899 "state": "online", 00:28:50.899 "raid_level": "raid1", 00:28:50.899 "superblock": true, 00:28:50.899 "num_base_bdevs": 2, 00:28:50.899 "num_base_bdevs_discovered": 1, 
00:28:50.899 "num_base_bdevs_operational": 1, 00:28:50.899 "base_bdevs_list": [ 00:28:50.899 { 00:28:50.899 "name": null, 00:28:50.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.899 "is_configured": false, 00:28:50.899 "data_offset": 256, 00:28:50.899 "data_size": 7936 00:28:50.899 }, 00:28:50.899 { 00:28:50.899 "name": "BaseBdev2", 00:28:50.899 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:50.899 "is_configured": true, 00:28:50.899 "data_offset": 256, 00:28:50.899 "data_size": 7936 00:28:50.899 } 00:28:50.899 ] 00:28:50.899 }' 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.899 19:13:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:51.467 19:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:51.725 [2024-06-10 19:13:06.406541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:51.725 [2024-06-10 19:13:06.406591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:51.725 [2024-06-10 19:13:06.406614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b1fa50 00:28:51.725 [2024-06-10 19:13:06.406626] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:51.725 [2024-06-10 19:13:06.406820] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:51.725 [2024-06-10 19:13:06.406836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:51.725 [2024-06-10 19:13:06.406888] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:51.725 [2024-06-10 19:13:06.406899] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:28:51.725 [2024-06-10 19:13:06.406908] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:51.725 [2024-06-10 19:13:06.406925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:51.725 [2024-06-10 19:13:06.408994] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b228f0 00:28:51.725 [2024-06-10 19:13:06.410344] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.725 spare 00:28:51.725 19:13:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.102 "name": "raid_bdev1", 00:28:53.102 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:53.102 "strip_size_kb": 0, 00:28:53.102 "state": "online", 00:28:53.102 "raid_level": "raid1", 00:28:53.102 "superblock": 
true, 00:28:53.102 "num_base_bdevs": 2, 00:28:53.102 "num_base_bdevs_discovered": 2, 00:28:53.102 "num_base_bdevs_operational": 2, 00:28:53.102 "process": { 00:28:53.102 "type": "rebuild", 00:28:53.102 "target": "spare", 00:28:53.102 "progress": { 00:28:53.102 "blocks": 3072, 00:28:53.102 "percent": 38 00:28:53.102 } 00:28:53.102 }, 00:28:53.102 "base_bdevs_list": [ 00:28:53.102 { 00:28:53.102 "name": "spare", 00:28:53.102 "uuid": "fc5cce0e-7fa9-5301-aef2-643c1de03d50", 00:28:53.102 "is_configured": true, 00:28:53.102 "data_offset": 256, 00:28:53.102 "data_size": 7936 00:28:53.102 }, 00:28:53.102 { 00:28:53.102 "name": "BaseBdev2", 00:28:53.102 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:53.102 "is_configured": true, 00:28:53.102 "data_offset": 256, 00:28:53.102 "data_size": 7936 00:28:53.102 } 00:28:53.102 ] 00:28:53.102 }' 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.102 19:13:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:53.361 [2024-06-10 19:13:07.947782] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.361 [2024-06-10 19:13:08.022136] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:53.361 [2024-06-10 19:13:08.022179] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:53.361 [2024-06-10 19:13:08.022193] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.361 [2024-06-10 19:13:08.022200] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.361 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.621 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:53.621 "name": "raid_bdev1", 00:28:53.621 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 
00:28:53.621 "strip_size_kb": 0, 00:28:53.621 "state": "online", 00:28:53.621 "raid_level": "raid1", 00:28:53.621 "superblock": true, 00:28:53.621 "num_base_bdevs": 2, 00:28:53.621 "num_base_bdevs_discovered": 1, 00:28:53.621 "num_base_bdevs_operational": 1, 00:28:53.621 "base_bdevs_list": [ 00:28:53.621 { 00:28:53.621 "name": null, 00:28:53.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.621 "is_configured": false, 00:28:53.621 "data_offset": 256, 00:28:53.621 "data_size": 7936 00:28:53.621 }, 00:28:53.621 { 00:28:53.621 "name": "BaseBdev2", 00:28:53.621 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:53.621 "is_configured": true, 00:28:53.621 "data_offset": 256, 00:28:53.621 "data_size": 7936 00:28:53.621 } 00:28:53.621 ] 00:28:53.621 }' 00:28:53.621 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:53.621 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:54.189 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:54.190 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.190 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:54.190 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:54.190 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.190 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.190 19:13:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.449 19:13:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.449 "name": "raid_bdev1", 00:28:54.449 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:54.449 "strip_size_kb": 0, 00:28:54.449 "state": "online", 00:28:54.449 "raid_level": "raid1", 00:28:54.449 "superblock": true, 00:28:54.449 "num_base_bdevs": 2, 00:28:54.449 "num_base_bdevs_discovered": 1, 00:28:54.449 "num_base_bdevs_operational": 1, 00:28:54.449 "base_bdevs_list": [ 00:28:54.449 { 00:28:54.449 "name": null, 00:28:54.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.449 "is_configured": false, 00:28:54.449 "data_offset": 256, 00:28:54.449 "data_size": 7936 00:28:54.449 }, 00:28:54.449 { 00:28:54.449 "name": "BaseBdev2", 00:28:54.449 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:54.449 "is_configured": true, 00:28:54.449 "data_offset": 256, 00:28:54.449 "data_size": 7936 00:28:54.449 } 00:28:54.449 ] 00:28:54.449 }' 00:28:54.449 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.449 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:54.449 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.449 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:54.449 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:54.708 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:54.967 [2024-06-10 19:13:09.572765] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:28:54.967 [2024-06-10 19:13:09.572807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.967 [2024-06-10 19:13:09.572825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b22a00 00:28:54.967 [2024-06-10 19:13:09.572837] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.967 [2024-06-10 19:13:09.573005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.967 [2024-06-10 19:13:09.573020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:54.967 [2024-06-10 19:13:09.573061] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:54.967 [2024-06-10 19:13:09.573072] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:54.967 [2024-06-10 19:13:09.573081] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:54.967 BaseBdev1 00:28:54.967 19:13:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:55.902 
19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.902 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.161 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.161 "name": "raid_bdev1", 00:28:56.161 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:56.161 "strip_size_kb": 0, 00:28:56.161 "state": "online", 00:28:56.161 "raid_level": "raid1", 00:28:56.161 "superblock": true, 00:28:56.161 "num_base_bdevs": 2, 00:28:56.161 "num_base_bdevs_discovered": 1, 00:28:56.161 "num_base_bdevs_operational": 1, 00:28:56.161 "base_bdevs_list": [ 00:28:56.161 { 00:28:56.161 "name": null, 00:28:56.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.161 "is_configured": false, 00:28:56.161 "data_offset": 256, 00:28:56.161 "data_size": 7936 00:28:56.161 }, 00:28:56.161 { 00:28:56.161 "name": "BaseBdev2", 00:28:56.161 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:56.161 "is_configured": true, 00:28:56.161 "data_offset": 256, 00:28:56.161 "data_size": 7936 00:28:56.161 } 00:28:56.161 ] 00:28:56.161 }' 00:28:56.161 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.161 19:13:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.728 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.988 "name": "raid_bdev1", 00:28:56.988 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:56.988 "strip_size_kb": 0, 00:28:56.988 "state": "online", 00:28:56.988 "raid_level": "raid1", 00:28:56.988 "superblock": true, 00:28:56.988 "num_base_bdevs": 2, 00:28:56.988 "num_base_bdevs_discovered": 1, 00:28:56.988 "num_base_bdevs_operational": 1, 00:28:56.988 "base_bdevs_list": [ 00:28:56.988 { 00:28:56.988 "name": null, 00:28:56.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.988 "is_configured": false, 00:28:56.988 "data_offset": 256, 00:28:56.988 "data_size": 7936 00:28:56.988 }, 00:28:56.988 { 00:28:56.988 "name": "BaseBdev2", 00:28:56.988 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:56.988 "is_configured": true, 00:28:56.988 "data_offset": 256, 00:28:56.988 "data_size": 7936 00:28:56.988 } 00:28:56.988 ] 00:28:56.988 }' 00:28:56.988 19:13:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@649 -- # local es=0 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:56.988 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.247 [2024-06-10 19:13:11.931003] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:57.247 [2024-06-10 19:13:11.931110] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:57.247 [2024-06-10 19:13:11.931125] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:57.247 request: 00:28:57.247 { 00:28:57.247 "raid_bdev": "raid_bdev1", 00:28:57.247 "base_bdev": "BaseBdev1", 00:28:57.247 "method": "bdev_raid_add_base_bdev", 00:28:57.247 "req_id": 1 00:28:57.247 } 00:28:57.247 Got JSON-RPC error response 00:28:57.247 response: 00:28:57.247 { 00:28:57.247 "code": -22, 00:28:57.247 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:57.247 } 00:28:57.247 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # es=1 00:28:57.247 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:57.247 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:57.247 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:57.247 19:13:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # 
sleep 1 00:28:58.623 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:58.623 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.623 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.623 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.623 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.624 19:13:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.624 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.624 "name": "raid_bdev1", 00:28:58.624 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:58.624 "strip_size_kb": 0, 00:28:58.624 "state": "online", 00:28:58.624 "raid_level": "raid1", 00:28:58.624 "superblock": true, 00:28:58.624 "num_base_bdevs": 2, 00:28:58.624 "num_base_bdevs_discovered": 1, 
00:28:58.624 "num_base_bdevs_operational": 1, 00:28:58.624 "base_bdevs_list": [ 00:28:58.624 { 00:28:58.624 "name": null, 00:28:58.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.624 "is_configured": false, 00:28:58.624 "data_offset": 256, 00:28:58.624 "data_size": 7936 00:28:58.624 }, 00:28:58.624 { 00:28:58.624 "name": "BaseBdev2", 00:28:58.624 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:58.624 "is_configured": true, 00:28:58.624 "data_offset": 256, 00:28:58.624 "data_size": 7936 00:28:58.624 } 00:28:58.624 ] 00:28:58.624 }' 00:28:58.624 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.624 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.192 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.451 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.451 "name": "raid_bdev1", 00:28:59.451 "uuid": "a45b1ced-2d85-4e48-b940-7851ae28da01", 00:28:59.451 "strip_size_kb": 0, 00:28:59.451 
"state": "online", 00:28:59.451 "raid_level": "raid1", 00:28:59.451 "superblock": true, 00:28:59.451 "num_base_bdevs": 2, 00:28:59.451 "num_base_bdevs_discovered": 1, 00:28:59.451 "num_base_bdevs_operational": 1, 00:28:59.451 "base_bdevs_list": [ 00:28:59.451 { 00:28:59.451 "name": null, 00:28:59.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.451 "is_configured": false, 00:28:59.451 "data_offset": 256, 00:28:59.451 "data_size": 7936 00:28:59.451 }, 00:28:59.451 { 00:28:59.451 "name": "BaseBdev2", 00:28:59.451 "uuid": "6fa7f07e-152f-58b6-a3a9-df7b74201949", 00:28:59.451 "is_configured": true, 00:28:59.451 "data_offset": 256, 00:28:59.451 "data_size": 7936 00:28:59.451 } 00:28:59.451 ] 00:28:59.451 }' 00:28:59.451 19:13:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.451 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:59.451 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:59.451 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 1799292 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@949 -- # '[' -z 1799292 ']' 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # kill -0 1799292 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # uname 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1799292 00:28:59.452 19:13:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1799292' 00:28:59.452 killing process with pid 1799292 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # kill 1799292 00:28:59.452 Received shutdown signal, test time was about 60.000000 seconds 00:28:59.452 00:28:59.452 Latency(us) 00:28:59.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.452 =================================================================================================================== 00:28:59.452 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:59.452 [2024-06-10 19:13:14.119954] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:59.452 [2024-06-10 19:13:14.120032] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:59.452 [2024-06-10 19:13:14.120073] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:59.452 [2024-06-10 19:13:14.120084] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b1fd70 name raid_bdev1, state offline 00:28:59.452 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # wait 1799292 00:28:59.452 [2024-06-10 19:13:14.148996] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:59.712 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:28:59.712 00:28:59.712 real 0m30.148s 00:28:59.712 user 0m46.493s 00:28:59.712 sys 0m4.952s 00:28:59.712 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:28:59.712 19:13:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:59.712 ************************************ 00:28:59.712 END TEST raid_rebuild_test_sb_md_separate 00:28:59.712 ************************************ 00:28:59.712 19:13:14 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:28:59.712 19:13:14 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:28:59.712 19:13:14 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:28:59.712 19:13:14 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:59.712 19:13:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:59.712 ************************************ 00:28:59.712 START TEST raid_state_function_test_sb_md_interleaved 00:28:59.712 ************************************ 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:59.712 19:13:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # 
raid_pid=1804895 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1804895' 00:28:59.712 Process raid pid: 1804895 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 1804895 /var/tmp/spdk-raid.sock 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 1804895 ']' 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:59.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:59.712 19:13:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:59.972 [2024-06-10 19:13:14.495048] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:28:59.972 [2024-06-10 19:13:14.495106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.0 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.1 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.2 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.3 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.4 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.5 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.6 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:01.7 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.0 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.1 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.2 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.3 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.4 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.5 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.6 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b6:02.7 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.0 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.1 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.2 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.3 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.4 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.5 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.6 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:01.7 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.0 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.1 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:28:59.972 EAL: Requested device 0000:b8:02.2 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.3 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.4 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.5 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.6 cannot be used 00:28:59.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.972 EAL: Requested device 0000:b8:02.7 cannot be used 00:28:59.972 [2024-06-10 19:13:14.629402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.972 [2024-06-10 19:13:14.716966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.232 [2024-06-10 19:13:14.778481] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:00.232 [2024-06-10 19:13:14.778517] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:00.800 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:00.800 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:29:00.800 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:29:01.059 [2024-06-10 19:13:15.590125] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:01.059 [2024-06-10 19:13:15.590162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:29:01.059 [2024-06-10 19:13:15.590172] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:01.059 [2024-06-10 19:13:15.590183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.059 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:01.319 19:13:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:01.319 "name": "Existed_Raid", 00:29:01.319 "uuid": "8c1d71be-1b32-4c27-b751-27aba0e5e994", 00:29:01.319 "strip_size_kb": 0, 00:29:01.319 "state": "configuring", 00:29:01.319 "raid_level": "raid1", 00:29:01.319 "superblock": true, 00:29:01.319 "num_base_bdevs": 2, 00:29:01.319 "num_base_bdevs_discovered": 0, 00:29:01.319 "num_base_bdevs_operational": 2, 00:29:01.319 "base_bdevs_list": [ 00:29:01.319 { 00:29:01.319 "name": "BaseBdev1", 00:29:01.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.319 "is_configured": false, 00:29:01.319 "data_offset": 0, 00:29:01.319 "data_size": 0 00:29:01.319 }, 00:29:01.319 { 00:29:01.319 "name": "BaseBdev2", 00:29:01.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.319 "is_configured": false, 00:29:01.319 "data_offset": 0, 00:29:01.319 "data_size": 0 00:29:01.319 } 00:29:01.319 ] 00:29:01.319 }' 00:29:01.319 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:01.319 19:13:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:01.936 19:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:01.936 [2024-06-10 19:13:16.612696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:01.936 [2024-06-10 19:13:16.612722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x146ff10 name Existed_Raid, state configuring 00:29:01.936 19:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:29:02.226 [2024-06-10 
19:13:16.833289] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:02.226 [2024-06-10 19:13:16.833318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:02.226 [2024-06-10 19:13:16.833327] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:02.226 [2024-06-10 19:13:16.833338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:02.226 19:13:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:29:02.485 [2024-06-10 19:13:17.055571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:02.485 BaseBdev1 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local i 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:29:02.485 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:02.744 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:03.004 [ 00:29:03.004 { 00:29:03.004 "name": "BaseBdev1", 00:29:03.004 "aliases": [ 00:29:03.004 "7e034a1d-703a-4f2b-92cf-8f846e160f8e" 00:29:03.004 ], 00:29:03.004 "product_name": "Malloc disk", 00:29:03.004 "block_size": 4128, 00:29:03.004 "num_blocks": 8192, 00:29:03.004 "uuid": "7e034a1d-703a-4f2b-92cf-8f846e160f8e", 00:29:03.004 "md_size": 32, 00:29:03.004 "md_interleave": true, 00:29:03.004 "dif_type": 0, 00:29:03.004 "assigned_rate_limits": { 00:29:03.004 "rw_ios_per_sec": 0, 00:29:03.004 "rw_mbytes_per_sec": 0, 00:29:03.004 "r_mbytes_per_sec": 0, 00:29:03.004 "w_mbytes_per_sec": 0 00:29:03.004 }, 00:29:03.004 "claimed": true, 00:29:03.004 "claim_type": "exclusive_write", 00:29:03.004 "zoned": false, 00:29:03.004 "supported_io_types": { 00:29:03.004 "read": true, 00:29:03.004 "write": true, 00:29:03.004 "unmap": true, 00:29:03.004 "write_zeroes": true, 00:29:03.004 "flush": true, 00:29:03.004 "reset": true, 00:29:03.004 "compare": false, 00:29:03.004 "compare_and_write": false, 00:29:03.004 "abort": true, 00:29:03.004 "nvme_admin": false, 00:29:03.004 "nvme_io": false 00:29:03.004 }, 00:29:03.004 "memory_domains": [ 00:29:03.004 { 00:29:03.004 "dma_device_id": "system", 00:29:03.004 "dma_device_type": 1 00:29:03.004 }, 00:29:03.004 { 00:29:03.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:03.004 "dma_device_type": 2 00:29:03.004 } 00:29:03.004 ], 00:29:03.004 "driver_specific": {} 00:29:03.004 } 00:29:03.004 ] 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # return 0 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.004 "name": "Existed_Raid", 00:29:03.004 "uuid": "9aaa91dc-9343-4983-a74a-fdd63e2fa14a", 00:29:03.004 "strip_size_kb": 0, 00:29:03.004 "state": "configuring", 00:29:03.004 "raid_level": "raid1", 00:29:03.004 "superblock": true, 00:29:03.004 "num_base_bdevs": 2, 00:29:03.004 "num_base_bdevs_discovered": 1, 00:29:03.004 "num_base_bdevs_operational": 2, 00:29:03.004 "base_bdevs_list": [ 00:29:03.004 { 00:29:03.004 "name": "BaseBdev1", 
00:29:03.004 "uuid": "7e034a1d-703a-4f2b-92cf-8f846e160f8e", 00:29:03.004 "is_configured": true, 00:29:03.004 "data_offset": 256, 00:29:03.004 "data_size": 7936 00:29:03.004 }, 00:29:03.004 { 00:29:03.004 "name": "BaseBdev2", 00:29:03.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.004 "is_configured": false, 00:29:03.004 "data_offset": 0, 00:29:03.004 "data_size": 0 00:29:03.004 } 00:29:03.004 ] 00:29:03.004 }' 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.004 19:13:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:03.573 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:03.832 [2024-06-10 19:13:18.511450] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:03.832 [2024-06-10 19:13:18.511487] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x146f800 name Existed_Raid, state configuring 00:29:03.832 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:29:04.091 [2024-06-10 19:13:18.736069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:04.091 [2024-06-10 19:13:18.737467] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:04.091 [2024-06-10 19:13:18.737497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:29:04.091 19:13:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.091 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:04.350 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.350 "name": "Existed_Raid", 00:29:04.350 "uuid": 
"2b8ef053-706d-411a-bf91-d8da00fd650c", 00:29:04.350 "strip_size_kb": 0, 00:29:04.350 "state": "configuring", 00:29:04.350 "raid_level": "raid1", 00:29:04.350 "superblock": true, 00:29:04.350 "num_base_bdevs": 2, 00:29:04.350 "num_base_bdevs_discovered": 1, 00:29:04.350 "num_base_bdevs_operational": 2, 00:29:04.350 "base_bdevs_list": [ 00:29:04.350 { 00:29:04.350 "name": "BaseBdev1", 00:29:04.350 "uuid": "7e034a1d-703a-4f2b-92cf-8f846e160f8e", 00:29:04.350 "is_configured": true, 00:29:04.350 "data_offset": 256, 00:29:04.350 "data_size": 7936 00:29:04.350 }, 00:29:04.350 { 00:29:04.350 "name": "BaseBdev2", 00:29:04.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.350 "is_configured": false, 00:29:04.350 "data_offset": 0, 00:29:04.350 "data_size": 0 00:29:04.350 } 00:29:04.350 ] 00:29:04.350 }' 00:29:04.350 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.350 19:13:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:04.918 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:29:05.178 [2024-06-10 19:13:19.774213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:05.178 [2024-06-10 19:13:19.774329] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14f16d0 00:29:05.178 [2024-06-10 19:13:19.774345] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:05.178 [2024-06-10 19:13:19.774402] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1467600 00:29:05.178 [2024-06-10 19:13:19.774467] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14f16d0 00:29:05.178 [2024-06-10 19:13:19.774476] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14f16d0 00:29:05.178 [2024-06-10 19:13:19.774525] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:05.178 BaseBdev2 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local i 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:29:05.178 19:13:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:05.437 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:05.437 [ 00:29:05.437 { 00:29:05.437 "name": "BaseBdev2", 00:29:05.437 "aliases": [ 00:29:05.437 "f73357c1-310d-4027-a932-e13d85b922e7" 00:29:05.437 ], 00:29:05.437 "product_name": "Malloc disk", 00:29:05.437 "block_size": 4128, 00:29:05.437 "num_blocks": 8192, 00:29:05.437 "uuid": "f73357c1-310d-4027-a932-e13d85b922e7", 00:29:05.437 "md_size": 32, 00:29:05.437 "md_interleave": true, 00:29:05.437 "dif_type": 0, 00:29:05.437 "assigned_rate_limits": { 00:29:05.437 "rw_ios_per_sec": 0, 00:29:05.437 "rw_mbytes_per_sec": 0, 00:29:05.437 
"r_mbytes_per_sec": 0, 00:29:05.437 "w_mbytes_per_sec": 0 00:29:05.437 }, 00:29:05.437 "claimed": true, 00:29:05.437 "claim_type": "exclusive_write", 00:29:05.437 "zoned": false, 00:29:05.437 "supported_io_types": { 00:29:05.437 "read": true, 00:29:05.437 "write": true, 00:29:05.437 "unmap": true, 00:29:05.437 "write_zeroes": true, 00:29:05.437 "flush": true, 00:29:05.437 "reset": true, 00:29:05.437 "compare": false, 00:29:05.437 "compare_and_write": false, 00:29:05.437 "abort": true, 00:29:05.437 "nvme_admin": false, 00:29:05.437 "nvme_io": false 00:29:05.437 }, 00:29:05.437 "memory_domains": [ 00:29:05.437 { 00:29:05.437 "dma_device_id": "system", 00:29:05.437 "dma_device_type": 1 00:29:05.437 }, 00:29:05.437 { 00:29:05.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:05.437 "dma_device_type": 2 00:29:05.437 } 00:29:05.437 ], 00:29:05.437 "driver_specific": {} 00:29:05.437 } 00:29:05.437 ] 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # return 0 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:05.697 19:13:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.697 "name": "Existed_Raid", 00:29:05.697 "uuid": "2b8ef053-706d-411a-bf91-d8da00fd650c", 00:29:05.697 "strip_size_kb": 0, 00:29:05.697 "state": "online", 00:29:05.697 "raid_level": "raid1", 00:29:05.697 "superblock": true, 00:29:05.697 "num_base_bdevs": 2, 00:29:05.697 "num_base_bdevs_discovered": 2, 00:29:05.697 "num_base_bdevs_operational": 2, 00:29:05.697 "base_bdevs_list": [ 00:29:05.697 { 00:29:05.697 "name": "BaseBdev1", 00:29:05.697 "uuid": "7e034a1d-703a-4f2b-92cf-8f846e160f8e", 00:29:05.697 "is_configured": true, 00:29:05.697 "data_offset": 256, 00:29:05.697 "data_size": 7936 00:29:05.697 }, 00:29:05.697 { 00:29:05.697 "name": "BaseBdev2", 00:29:05.697 "uuid": "f73357c1-310d-4027-a932-e13d85b922e7", 00:29:05.697 "is_configured": true, 00:29:05.697 "data_offset": 256, 00:29:05.697 "data_size": 7936 00:29:05.697 } 00:29:05.697 ] 00:29:05.697 }' 
00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.697 19:13:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:06.265 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:29:06.265 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:29:06.265 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:06.265 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:06.265 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:06.265 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:29:06.524 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:06.524 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:06.524 [2024-06-10 19:13:21.226274] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:06.524 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:06.524 "name": "Existed_Raid", 00:29:06.524 "aliases": [ 00:29:06.524 "2b8ef053-706d-411a-bf91-d8da00fd650c" 00:29:06.524 ], 00:29:06.524 "product_name": "Raid Volume", 00:29:06.524 "block_size": 4128, 00:29:06.524 "num_blocks": 7936, 00:29:06.524 "uuid": "2b8ef053-706d-411a-bf91-d8da00fd650c", 00:29:06.524 "md_size": 32, 00:29:06.524 "md_interleave": true, 00:29:06.524 "dif_type": 0, 
00:29:06.524 "assigned_rate_limits": { 00:29:06.524 "rw_ios_per_sec": 0, 00:29:06.524 "rw_mbytes_per_sec": 0, 00:29:06.524 "r_mbytes_per_sec": 0, 00:29:06.524 "w_mbytes_per_sec": 0 00:29:06.524 }, 00:29:06.524 "claimed": false, 00:29:06.524 "zoned": false, 00:29:06.524 "supported_io_types": { 00:29:06.524 "read": true, 00:29:06.524 "write": true, 00:29:06.524 "unmap": false, 00:29:06.524 "write_zeroes": true, 00:29:06.524 "flush": false, 00:29:06.524 "reset": true, 00:29:06.524 "compare": false, 00:29:06.524 "compare_and_write": false, 00:29:06.524 "abort": false, 00:29:06.524 "nvme_admin": false, 00:29:06.524 "nvme_io": false 00:29:06.524 }, 00:29:06.524 "memory_domains": [ 00:29:06.524 { 00:29:06.524 "dma_device_id": "system", 00:29:06.524 "dma_device_type": 1 00:29:06.524 }, 00:29:06.524 { 00:29:06.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.524 "dma_device_type": 2 00:29:06.524 }, 00:29:06.524 { 00:29:06.524 "dma_device_id": "system", 00:29:06.524 "dma_device_type": 1 00:29:06.524 }, 00:29:06.524 { 00:29:06.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.524 "dma_device_type": 2 00:29:06.524 } 00:29:06.524 ], 00:29:06.524 "driver_specific": { 00:29:06.524 "raid": { 00:29:06.524 "uuid": "2b8ef053-706d-411a-bf91-d8da00fd650c", 00:29:06.524 "strip_size_kb": 0, 00:29:06.524 "state": "online", 00:29:06.524 "raid_level": "raid1", 00:29:06.524 "superblock": true, 00:29:06.524 "num_base_bdevs": 2, 00:29:06.524 "num_base_bdevs_discovered": 2, 00:29:06.524 "num_base_bdevs_operational": 2, 00:29:06.524 "base_bdevs_list": [ 00:29:06.524 { 00:29:06.524 "name": "BaseBdev1", 00:29:06.524 "uuid": "7e034a1d-703a-4f2b-92cf-8f846e160f8e", 00:29:06.524 "is_configured": true, 00:29:06.524 "data_offset": 256, 00:29:06.524 "data_size": 7936 00:29:06.524 }, 00:29:06.524 { 00:29:06.524 "name": "BaseBdev2", 00:29:06.524 "uuid": "f73357c1-310d-4027-a932-e13d85b922e7", 00:29:06.524 "is_configured": true, 00:29:06.524 "data_offset": 256, 00:29:06.524 "data_size": 7936 
00:29:06.524 } 00:29:06.524 ] 00:29:06.524 } 00:29:06.524 } 00:29:06.524 }' 00:29:06.524 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:06.783 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:29:06.783 BaseBdev2' 00:29:06.783 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:06.783 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:29:06.783 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:06.783 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:06.783 "name": "BaseBdev1", 00:29:06.783 "aliases": [ 00:29:06.783 "7e034a1d-703a-4f2b-92cf-8f846e160f8e" 00:29:06.783 ], 00:29:06.783 "product_name": "Malloc disk", 00:29:06.783 "block_size": 4128, 00:29:06.783 "num_blocks": 8192, 00:29:06.783 "uuid": "7e034a1d-703a-4f2b-92cf-8f846e160f8e", 00:29:06.783 "md_size": 32, 00:29:06.783 "md_interleave": true, 00:29:06.783 "dif_type": 0, 00:29:06.783 "assigned_rate_limits": { 00:29:06.783 "rw_ios_per_sec": 0, 00:29:06.783 "rw_mbytes_per_sec": 0, 00:29:06.783 "r_mbytes_per_sec": 0, 00:29:06.783 "w_mbytes_per_sec": 0 00:29:06.783 }, 00:29:06.783 "claimed": true, 00:29:06.783 "claim_type": "exclusive_write", 00:29:06.783 "zoned": false, 00:29:06.783 "supported_io_types": { 00:29:06.783 "read": true, 00:29:06.783 "write": true, 00:29:06.783 "unmap": true, 00:29:06.783 "write_zeroes": true, 00:29:06.783 "flush": true, 00:29:06.783 "reset": true, 00:29:06.783 "compare": false, 00:29:06.783 "compare_and_write": false, 
00:29:06.783 "abort": true, 00:29:06.783 "nvme_admin": false, 00:29:06.783 "nvme_io": false 00:29:06.783 }, 00:29:06.783 "memory_domains": [ 00:29:06.783 { 00:29:06.783 "dma_device_id": "system", 00:29:06.783 "dma_device_type": 1 00:29:06.783 }, 00:29:06.783 { 00:29:06.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.783 "dma_device_type": 2 00:29:06.783 } 00:29:06.783 ], 00:29:06.783 "driver_specific": {} 00:29:06.783 }' 00:29:06.783 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:07.043 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.301 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.301 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:07.301 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:29:07.301 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:07.301 19:13:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:07.571 "name": "BaseBdev2", 00:29:07.571 "aliases": [ 00:29:07.571 "f73357c1-310d-4027-a932-e13d85b922e7" 00:29:07.571 ], 00:29:07.571 "product_name": "Malloc disk", 00:29:07.571 "block_size": 4128, 00:29:07.571 "num_blocks": 8192, 00:29:07.571 "uuid": "f73357c1-310d-4027-a932-e13d85b922e7", 00:29:07.571 "md_size": 32, 00:29:07.571 "md_interleave": true, 00:29:07.571 "dif_type": 0, 00:29:07.571 "assigned_rate_limits": { 00:29:07.571 "rw_ios_per_sec": 0, 00:29:07.571 "rw_mbytes_per_sec": 0, 00:29:07.571 "r_mbytes_per_sec": 0, 00:29:07.571 "w_mbytes_per_sec": 0 00:29:07.571 }, 00:29:07.571 "claimed": true, 00:29:07.571 "claim_type": "exclusive_write", 00:29:07.571 "zoned": false, 00:29:07.571 "supported_io_types": { 00:29:07.571 "read": true, 00:29:07.571 "write": true, 00:29:07.571 "unmap": true, 00:29:07.571 "write_zeroes": true, 00:29:07.571 "flush": true, 00:29:07.571 "reset": true, 00:29:07.571 "compare": false, 00:29:07.571 "compare_and_write": false, 00:29:07.571 "abort": true, 00:29:07.571 "nvme_admin": false, 00:29:07.571 "nvme_io": false 00:29:07.571 }, 00:29:07.571 "memory_domains": [ 00:29:07.571 { 00:29:07.571 "dma_device_id": "system", 00:29:07.571 "dma_device_type": 1 00:29:07.571 }, 00:29:07.571 { 00:29:07.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.571 "dma_device_type": 2 00:29:07.571 } 00:29:07.571 ], 00:29:07.571 "driver_specific": {} 00:29:07.571 }' 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:07.571 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.830 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.830 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:07.830 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:08.089 [2024-06-10 19:13:22.601729] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.089 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:08.348 19:13:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:08.348 "name": "Existed_Raid", 00:29:08.348 "uuid": "2b8ef053-706d-411a-bf91-d8da00fd650c", 00:29:08.348 "strip_size_kb": 0, 00:29:08.348 "state": "online", 00:29:08.348 "raid_level": "raid1", 00:29:08.348 "superblock": true, 00:29:08.348 "num_base_bdevs": 2, 00:29:08.348 "num_base_bdevs_discovered": 1, 00:29:08.348 "num_base_bdevs_operational": 1, 00:29:08.348 "base_bdevs_list": [ 00:29:08.348 { 00:29:08.348 "name": null, 00:29:08.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.348 "is_configured": false, 00:29:08.348 "data_offset": 256, 00:29:08.348 "data_size": 7936 00:29:08.348 }, 00:29:08.348 { 00:29:08.348 "name": "BaseBdev2", 00:29:08.348 "uuid": "f73357c1-310d-4027-a932-e13d85b922e7", 00:29:08.348 "is_configured": true, 00:29:08.348 "data_offset": 256, 00:29:08.348 "data_size": 7936 00:29:08.348 } 00:29:08.348 ] 00:29:08.348 }' 00:29:08.348 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:08.348 19:13:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:08.917 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:29:08.917 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:29:08.917 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.917 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:29:08.917 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:29:08.917 19:13:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:08.917 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:09.177 [2024-06-10 19:13:23.854170] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:09.177 [2024-06-10 19:13:23.854242] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:09.177 [2024-06-10 19:13:23.864887] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:09.177 [2024-06-10 19:13:23.864920] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:09.177 [2024-06-10 19:13:23.864931] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14f16d0 name Existed_Raid, state offline 00:29:09.177 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:29:09.177 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:29:09.177 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.177 19:13:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:29:09.437 19:13:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 1804895 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 1804895 ']' 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 1804895 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1804895 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1804895' 00:29:09.437 killing process with pid 1804895 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # kill 1804895 00:29:09.437 [2024-06-10 19:13:24.170053] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:09.437 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # wait 1804895 00:29:09.437 [2024-06-10 19:13:24.170896] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:09.696 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:29:09.696 00:29:09.696 real 0m9.931s 00:29:09.696 user 0m17.596s 00:29:09.696 sys 0m1.915s 00:29:09.696 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:29:09.696 19:13:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:09.696 ************************************ 00:29:09.696 END TEST raid_state_function_test_sb_md_interleaved 00:29:09.696 ************************************ 00:29:09.696 19:13:24 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:29:09.696 19:13:24 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:29:09.696 19:13:24 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:09.696 19:13:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:09.696 ************************************ 00:29:09.696 START TEST raid_superblock_test_md_interleaved 00:29:09.696 ************************************ 00:29:09.696 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:29:09.696 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:29:09.696 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:29:09.696 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:29:09.697 19:13:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=1806894 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 1806894 /var/tmp/spdk-raid.sock 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:09.697 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 1806894 ']' 00:29:09.956 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:09.956 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:09.956 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:09.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:29:09.956 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:09.956 19:13:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:09.956 [2024-06-10 19:13:24.509289] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:29:09.956 [2024-06-10 19:13:24.509345] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806894 ] 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.0 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.1 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.2 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.3 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.4 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.5 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.6 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:01.7 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.0 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: 
Requested device 0000:b6:02.1 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.2 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.3 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.4 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.5 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.6 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b6:02.7 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.956 EAL: Requested device 0000:b8:01.0 cannot be used 00:29:09.956 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:01.1 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:01.2 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:01.3 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:01.4 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:01.5 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:01.6 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 
0000:b8:01.7 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.0 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.1 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.2 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.3 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.4 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.5 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.6 cannot be used 00:29:09.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.957 EAL: Requested device 0000:b8:02.7 cannot be used 00:29:09.957 [2024-06-10 19:13:24.643109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.216 [2024-06-10 19:13:24.730248] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.216 [2024-06-10 19:13:24.790787] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:10.216 [2024-06-10 19:13:24.790829] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:29:10.785 19:13:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:10.785 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:29:11.044 malloc1 00:29:11.044 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:11.303 [2024-06-10 19:13:25.844641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:11.303 [2024-06-10 19:13:25.844684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.303 [2024-06-10 19:13:25.844702] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f9ae0 00:29:11.303 [2024-06-10 19:13:25.844713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.303 [2024-06-10 19:13:25.846058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:29:11.303 [2024-06-10 19:13:25.846084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:11.303 pt1 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:11.303 19:13:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:29:11.562 malloc2 00:29:11.563 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:11.563 [2024-06-10 19:13:26.298525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:11.563 [2024-06-10 19:13:26.298565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.563 [2024-06-10 19:13:26.298590] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12deb70 00:29:11.563 [2024-06-10 19:13:26.298601] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.563 [2024-06-10 19:13:26.299851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.563 [2024-06-10 19:13:26.299878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:11.563 pt2 00:29:11.563 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:11.563 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:11.563 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:29:11.821 [2024-06-10 19:13:26.523127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:11.821 [2024-06-10 19:13:26.524282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:11.821 [2024-06-10 19:13:26.524417] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12e0520 00:29:11.821 [2024-06-10 19:13:26.524429] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:11.821 [2024-06-10 19:13:26.524493] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x115c650 00:29:11.821 [2024-06-10 19:13:26.524567] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12e0520 00:29:11.821 [2024-06-10 19:13:26.524585] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12e0520 00:29:11.822 [2024-06-10 19:13:26.524639] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.822 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.081 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:12.081 "name": "raid_bdev1", 00:29:12.081 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:12.081 "strip_size_kb": 0, 00:29:12.081 "state": "online", 00:29:12.081 "raid_level": "raid1", 00:29:12.081 "superblock": true, 00:29:12.081 "num_base_bdevs": 2, 00:29:12.081 "num_base_bdevs_discovered": 2, 00:29:12.081 
"num_base_bdevs_operational": 2, 00:29:12.081 "base_bdevs_list": [ 00:29:12.081 { 00:29:12.081 "name": "pt1", 00:29:12.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.081 "is_configured": true, 00:29:12.081 "data_offset": 256, 00:29:12.081 "data_size": 7936 00:29:12.081 }, 00:29:12.081 { 00:29:12.081 "name": "pt2", 00:29:12.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:12.081 "is_configured": true, 00:29:12.081 "data_offset": 256, 00:29:12.081 "data_size": 7936 00:29:12.081 } 00:29:12.081 ] 00:29:12.081 }' 00:29:12.081 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:12.081 19:13:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:12.649 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:12.908 [2024-06-10 19:13:27.534019] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:12.908 19:13:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:12.908 "name": "raid_bdev1", 00:29:12.908 "aliases": [ 00:29:12.908 "c45304ff-22fa-4d95-863a-ef53b27ab6f8" 00:29:12.908 ], 00:29:12.908 "product_name": "Raid Volume", 00:29:12.908 "block_size": 4128, 00:29:12.908 "num_blocks": 7936, 00:29:12.909 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:12.909 "md_size": 32, 00:29:12.909 "md_interleave": true, 00:29:12.909 "dif_type": 0, 00:29:12.909 "assigned_rate_limits": { 00:29:12.909 "rw_ios_per_sec": 0, 00:29:12.909 "rw_mbytes_per_sec": 0, 00:29:12.909 "r_mbytes_per_sec": 0, 00:29:12.909 "w_mbytes_per_sec": 0 00:29:12.909 }, 00:29:12.909 "claimed": false, 00:29:12.909 "zoned": false, 00:29:12.909 "supported_io_types": { 00:29:12.909 "read": true, 00:29:12.909 "write": true, 00:29:12.909 "unmap": false, 00:29:12.909 "write_zeroes": true, 00:29:12.909 "flush": false, 00:29:12.909 "reset": true, 00:29:12.909 "compare": false, 00:29:12.909 "compare_and_write": false, 00:29:12.909 "abort": false, 00:29:12.909 "nvme_admin": false, 00:29:12.909 "nvme_io": false 00:29:12.909 }, 00:29:12.909 "memory_domains": [ 00:29:12.909 { 00:29:12.909 "dma_device_id": "system", 00:29:12.909 "dma_device_type": 1 00:29:12.909 }, 00:29:12.909 { 00:29:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.909 "dma_device_type": 2 00:29:12.909 }, 00:29:12.909 { 00:29:12.909 "dma_device_id": "system", 00:29:12.909 "dma_device_type": 1 00:29:12.909 }, 00:29:12.909 { 00:29:12.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.909 "dma_device_type": 2 00:29:12.909 } 00:29:12.909 ], 00:29:12.909 "driver_specific": { 00:29:12.909 "raid": { 00:29:12.909 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:12.909 "strip_size_kb": 0, 00:29:12.909 "state": "online", 00:29:12.909 "raid_level": "raid1", 00:29:12.909 "superblock": true, 00:29:12.909 "num_base_bdevs": 2, 00:29:12.909 "num_base_bdevs_discovered": 2, 00:29:12.909 
"num_base_bdevs_operational": 2, 00:29:12.909 "base_bdevs_list": [ 00:29:12.909 { 00:29:12.909 "name": "pt1", 00:29:12.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.909 "is_configured": true, 00:29:12.909 "data_offset": 256, 00:29:12.909 "data_size": 7936 00:29:12.909 }, 00:29:12.909 { 00:29:12.909 "name": "pt2", 00:29:12.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:12.909 "is_configured": true, 00:29:12.909 "data_offset": 256, 00:29:12.909 "data_size": 7936 00:29:12.909 } 00:29:12.909 ] 00:29:12.909 } 00:29:12.909 } 00:29:12.909 }' 00:29:12.909 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:12.909 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:12.909 pt2' 00:29:12.909 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:12.909 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:12.909 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:13.167 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:13.167 "name": "pt1", 00:29:13.167 "aliases": [ 00:29:13.167 "00000000-0000-0000-0000-000000000001" 00:29:13.167 ], 00:29:13.167 "product_name": "passthru", 00:29:13.167 "block_size": 4128, 00:29:13.167 "num_blocks": 8192, 00:29:13.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:13.167 "md_size": 32, 00:29:13.167 "md_interleave": true, 00:29:13.167 "dif_type": 0, 00:29:13.167 "assigned_rate_limits": { 00:29:13.167 "rw_ios_per_sec": 0, 00:29:13.167 "rw_mbytes_per_sec": 0, 00:29:13.167 "r_mbytes_per_sec": 0, 00:29:13.167 
"w_mbytes_per_sec": 0 00:29:13.167 }, 00:29:13.167 "claimed": true, 00:29:13.167 "claim_type": "exclusive_write", 00:29:13.167 "zoned": false, 00:29:13.167 "supported_io_types": { 00:29:13.167 "read": true, 00:29:13.167 "write": true, 00:29:13.167 "unmap": true, 00:29:13.167 "write_zeroes": true, 00:29:13.167 "flush": true, 00:29:13.167 "reset": true, 00:29:13.167 "compare": false, 00:29:13.167 "compare_and_write": false, 00:29:13.167 "abort": true, 00:29:13.167 "nvme_admin": false, 00:29:13.167 "nvme_io": false 00:29:13.167 }, 00:29:13.167 "memory_domains": [ 00:29:13.167 { 00:29:13.167 "dma_device_id": "system", 00:29:13.167 "dma_device_type": 1 00:29:13.167 }, 00:29:13.167 { 00:29:13.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.167 "dma_device_type": 2 00:29:13.167 } 00:29:13.167 ], 00:29:13.167 "driver_specific": { 00:29:13.167 "passthru": { 00:29:13.167 "name": "pt1", 00:29:13.167 "base_bdev_name": "malloc1" 00:29:13.167 } 00:29:13.167 } 00:29:13.167 }' 00:29:13.167 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.167 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.167 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:13.167 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.426 19:13:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.426 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:13.426 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.426 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.426 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == 
true ]] 00:29:13.426 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.427 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.427 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:13.427 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:13.427 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:13.427 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:13.686 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:13.686 "name": "pt2", 00:29:13.686 "aliases": [ 00:29:13.686 "00000000-0000-0000-0000-000000000002" 00:29:13.686 ], 00:29:13.686 "product_name": "passthru", 00:29:13.686 "block_size": 4128, 00:29:13.686 "num_blocks": 8192, 00:29:13.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:13.686 "md_size": 32, 00:29:13.686 "md_interleave": true, 00:29:13.686 "dif_type": 0, 00:29:13.686 "assigned_rate_limits": { 00:29:13.686 "rw_ios_per_sec": 0, 00:29:13.686 "rw_mbytes_per_sec": 0, 00:29:13.686 "r_mbytes_per_sec": 0, 00:29:13.686 "w_mbytes_per_sec": 0 00:29:13.686 }, 00:29:13.686 "claimed": true, 00:29:13.686 "claim_type": "exclusive_write", 00:29:13.686 "zoned": false, 00:29:13.686 "supported_io_types": { 00:29:13.686 "read": true, 00:29:13.686 "write": true, 00:29:13.686 "unmap": true, 00:29:13.686 "write_zeroes": true, 00:29:13.686 "flush": true, 00:29:13.686 "reset": true, 00:29:13.686 "compare": false, 00:29:13.686 "compare_and_write": false, 00:29:13.686 "abort": true, 00:29:13.686 "nvme_admin": false, 00:29:13.686 "nvme_io": false 00:29:13.686 }, 00:29:13.686 
"memory_domains": [ 00:29:13.686 { 00:29:13.686 "dma_device_id": "system", 00:29:13.686 "dma_device_type": 1 00:29:13.686 }, 00:29:13.686 { 00:29:13.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.686 "dma_device_type": 2 00:29:13.686 } 00:29:13.686 ], 00:29:13.686 "driver_specific": { 00:29:13.686 "passthru": { 00:29:13.686 "name": "pt2", 00:29:13.686 "base_bdev_name": "malloc2" 00:29:13.686 } 00:29:13.686 } 00:29:13.686 }' 00:29:13.686 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.945 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:14.204 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:14.204 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:29:14.204 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:29:14.204 [2024-06-10 19:13:28.949743] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:14.464 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c45304ff-22fa-4d95-863a-ef53b27ab6f8 00:29:14.464 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z c45304ff-22fa-4d95-863a-ef53b27ab6f8 ']' 00:29:14.464 19:13:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:14.464 [2024-06-10 19:13:29.174120] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:14.464 [2024-06-10 19:13:29.174138] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:14.464 [2024-06-10 19:13:29.174186] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:14.464 [2024-06-10 19:13:29.174231] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:14.464 [2024-06-10 19:13:29.174241] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12e0520 name raid_bdev1, state offline 00:29:14.464 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.464 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:29:14.723 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:29:14.723 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:29:14.723 19:13:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.723 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:14.981 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.981 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:15.241 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:15.241 19:13:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # local es=0 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:15.501 19:13:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:15.501 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:15.760 [2024-06-10 19:13:30.309058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:15.760 [2024-06-10 19:13:30.310322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:15.760 [2024-06-10 19:13:30.310371] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:15.760 [2024-06-10 19:13:30.310410] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:15.760 [2024-06-10 19:13:30.310427] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:15.760 [2024-06-10 19:13:30.310436] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12ea860 name raid_bdev1, state configuring 00:29:15.760 request: 00:29:15.760 { 00:29:15.760 "name": "raid_bdev1", 00:29:15.760 "raid_level": "raid1", 00:29:15.760 "base_bdevs": [ 00:29:15.760 "malloc1", 00:29:15.760 "malloc2" 00:29:15.760 ], 00:29:15.760 "superblock": false, 00:29:15.760 "method": "bdev_raid_create", 00:29:15.760 "req_id": 1 00:29:15.760 } 00:29:15.760 Got JSON-RPC error response 00:29:15.760 response: 00:29:15.760 { 00:29:15.760 "code": -17, 00:29:15.760 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:15.760 } 00:29:15.760 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # es=1 00:29:15.760 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:15.760 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:15.760 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:15.760 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.760 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:29:16.019 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:29:16.019 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:29:16.019 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:29:16.278 [2024-06-10 19:13:30.782256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:16.278 [2024-06-10 19:13:30.782296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.278 [2024-06-10 19:13:30.782314] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x115c840 00:29:16.278 [2024-06-10 19:13:30.782326] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.278 [2024-06-10 19:13:30.783602] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.278 [2024-06-10 19:13:30.783628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:16.278 [2024-06-10 19:13:30.783667] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:16.278 [2024-06-10 19:13:30.783689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:16.278 pt1 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.278 19:13:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.278 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.278 "name": "raid_bdev1", 00:29:16.278 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:16.278 "strip_size_kb": 0, 00:29:16.278 "state": "configuring", 00:29:16.278 "raid_level": "raid1", 00:29:16.278 "superblock": true, 00:29:16.278 "num_base_bdevs": 2, 00:29:16.278 "num_base_bdevs_discovered": 1, 00:29:16.278 "num_base_bdevs_operational": 2, 00:29:16.278 "base_bdevs_list": [ 00:29:16.278 { 00:29:16.278 "name": "pt1", 00:29:16.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:16.278 "is_configured": true, 00:29:16.278 "data_offset": 256, 00:29:16.278 "data_size": 7936 00:29:16.278 }, 00:29:16.278 { 00:29:16.278 "name": null, 00:29:16.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:16.278 "is_configured": false, 00:29:16.278 "data_offset": 256, 00:29:16.278 "data_size": 7936 00:29:16.278 } 00:29:16.278 ] 00:29:16.278 }' 00:29:16.278 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.278 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:16.847 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:29:16.847 19:13:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:29:16.847 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:16.847 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:17.106 [2024-06-10 19:13:31.800955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:17.106 [2024-06-10 19:13:31.800994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.106 [2024-06-10 19:13:31.801010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1155690 00:29:17.106 [2024-06-10 19:13:31.801021] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.106 [2024-06-10 19:13:31.801162] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.106 [2024-06-10 19:13:31.801176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:17.106 [2024-06-10 19:13:31.801211] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:17.106 [2024-06-10 19:13:31.801227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:17.106 [2024-06-10 19:13:31.801297] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12e1390 00:29:17.106 [2024-06-10 19:13:31.801307] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:17.106 [2024-06-10 19:13:31.801356] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11550b0 00:29:17.106 [2024-06-10 19:13:31.801422] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12e1390 00:29:17.106 [2024-06-10 19:13:31.801431] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12e1390 00:29:17.106 [2024-06-10 19:13:31.801482] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.106 pt2 00:29:17.106 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:17.106 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:17.106 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:17.107 19:13:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.107 19:13:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.366 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.366 "name": "raid_bdev1", 00:29:17.366 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:17.366 "strip_size_kb": 0, 00:29:17.366 "state": "online", 00:29:17.366 "raid_level": "raid1", 00:29:17.366 "superblock": true, 00:29:17.366 "num_base_bdevs": 2, 00:29:17.366 "num_base_bdevs_discovered": 2, 00:29:17.366 "num_base_bdevs_operational": 2, 00:29:17.366 "base_bdevs_list": [ 00:29:17.366 { 00:29:17.366 "name": "pt1", 00:29:17.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:17.366 "is_configured": true, 00:29:17.366 "data_offset": 256, 00:29:17.366 "data_size": 7936 00:29:17.366 }, 00:29:17.366 { 00:29:17.366 "name": "pt2", 00:29:17.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:17.366 "is_configured": true, 00:29:17.366 "data_offset": 256, 00:29:17.366 "data_size": 7936 00:29:17.366 } 00:29:17.366 ] 00:29:17.366 }' 00:29:17.366 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.366 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:17.932 19:13:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:17.932 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:18.190 [2024-06-10 19:13:32.823852] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:18.190 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:18.190 "name": "raid_bdev1", 00:29:18.190 "aliases": [ 00:29:18.190 "c45304ff-22fa-4d95-863a-ef53b27ab6f8" 00:29:18.190 ], 00:29:18.190 "product_name": "Raid Volume", 00:29:18.190 "block_size": 4128, 00:29:18.190 "num_blocks": 7936, 00:29:18.190 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:18.190 "md_size": 32, 00:29:18.190 "md_interleave": true, 00:29:18.190 "dif_type": 0, 00:29:18.190 "assigned_rate_limits": { 00:29:18.190 "rw_ios_per_sec": 0, 00:29:18.190 "rw_mbytes_per_sec": 0, 00:29:18.190 "r_mbytes_per_sec": 0, 00:29:18.190 "w_mbytes_per_sec": 0 00:29:18.190 }, 00:29:18.190 "claimed": false, 00:29:18.190 "zoned": false, 00:29:18.190 "supported_io_types": { 00:29:18.190 "read": true, 00:29:18.190 "write": true, 00:29:18.190 "unmap": false, 00:29:18.190 "write_zeroes": true, 00:29:18.190 "flush": false, 00:29:18.190 "reset": true, 00:29:18.190 "compare": false, 00:29:18.190 "compare_and_write": false, 00:29:18.190 "abort": false, 00:29:18.190 "nvme_admin": false, 00:29:18.190 "nvme_io": false 00:29:18.190 }, 00:29:18.190 "memory_domains": [ 00:29:18.190 { 00:29:18.190 "dma_device_id": "system", 00:29:18.190 "dma_device_type": 1 00:29:18.190 }, 00:29:18.190 { 00:29:18.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:18.190 "dma_device_type": 2 00:29:18.190 }, 00:29:18.190 { 00:29:18.190 "dma_device_id": 
"system", 00:29:18.190 "dma_device_type": 1 00:29:18.190 }, 00:29:18.190 { 00:29:18.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:18.190 "dma_device_type": 2 00:29:18.190 } 00:29:18.190 ], 00:29:18.190 "driver_specific": { 00:29:18.190 "raid": { 00:29:18.190 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:18.190 "strip_size_kb": 0, 00:29:18.190 "state": "online", 00:29:18.190 "raid_level": "raid1", 00:29:18.190 "superblock": true, 00:29:18.190 "num_base_bdevs": 2, 00:29:18.190 "num_base_bdevs_discovered": 2, 00:29:18.190 "num_base_bdevs_operational": 2, 00:29:18.190 "base_bdevs_list": [ 00:29:18.190 { 00:29:18.190 "name": "pt1", 00:29:18.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:18.190 "is_configured": true, 00:29:18.190 "data_offset": 256, 00:29:18.190 "data_size": 7936 00:29:18.190 }, 00:29:18.190 { 00:29:18.190 "name": "pt2", 00:29:18.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:18.190 "is_configured": true, 00:29:18.190 "data_offset": 256, 00:29:18.190 "data_size": 7936 00:29:18.190 } 00:29:18.190 ] 00:29:18.190 } 00:29:18.190 } 00:29:18.190 }' 00:29:18.190 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:18.191 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:18.191 pt2' 00:29:18.191 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:18.191 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:18.191 19:13:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:18.450 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:18.450 
"name": "pt1", 00:29:18.450 "aliases": [ 00:29:18.450 "00000000-0000-0000-0000-000000000001" 00:29:18.450 ], 00:29:18.450 "product_name": "passthru", 00:29:18.450 "block_size": 4128, 00:29:18.450 "num_blocks": 8192, 00:29:18.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:18.450 "md_size": 32, 00:29:18.450 "md_interleave": true, 00:29:18.450 "dif_type": 0, 00:29:18.450 "assigned_rate_limits": { 00:29:18.450 "rw_ios_per_sec": 0, 00:29:18.450 "rw_mbytes_per_sec": 0, 00:29:18.450 "r_mbytes_per_sec": 0, 00:29:18.450 "w_mbytes_per_sec": 0 00:29:18.450 }, 00:29:18.450 "claimed": true, 00:29:18.450 "claim_type": "exclusive_write", 00:29:18.450 "zoned": false, 00:29:18.450 "supported_io_types": { 00:29:18.450 "read": true, 00:29:18.450 "write": true, 00:29:18.450 "unmap": true, 00:29:18.450 "write_zeroes": true, 00:29:18.450 "flush": true, 00:29:18.450 "reset": true, 00:29:18.450 "compare": false, 00:29:18.450 "compare_and_write": false, 00:29:18.450 "abort": true, 00:29:18.450 "nvme_admin": false, 00:29:18.450 "nvme_io": false 00:29:18.450 }, 00:29:18.450 "memory_domains": [ 00:29:18.450 { 00:29:18.450 "dma_device_id": "system", 00:29:18.450 "dma_device_type": 1 00:29:18.450 }, 00:29:18.450 { 00:29:18.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:18.450 "dma_device_type": 2 00:29:18.450 } 00:29:18.450 ], 00:29:18.450 "driver_specific": { 00:29:18.450 "passthru": { 00:29:18.450 "name": "pt1", 00:29:18.450 "base_bdev_name": "malloc1" 00:29:18.450 } 00:29:18.450 } 00:29:18.450 }' 00:29:18.450 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:18.450 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:18.450 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:18.450 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:18.709 19:13:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:18.709 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:18.969 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:18.969 "name": "pt2", 00:29:18.969 "aliases": [ 00:29:18.969 "00000000-0000-0000-0000-000000000002" 00:29:18.969 ], 00:29:18.969 "product_name": "passthru", 00:29:18.969 "block_size": 4128, 00:29:18.969 "num_blocks": 8192, 00:29:18.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:18.969 "md_size": 32, 00:29:18.969 "md_interleave": true, 00:29:18.969 "dif_type": 0, 00:29:18.969 "assigned_rate_limits": { 00:29:18.969 "rw_ios_per_sec": 0, 00:29:18.969 "rw_mbytes_per_sec": 0, 00:29:18.969 "r_mbytes_per_sec": 0, 00:29:18.969 
"w_mbytes_per_sec": 0 00:29:18.969 }, 00:29:18.969 "claimed": true, 00:29:18.969 "claim_type": "exclusive_write", 00:29:18.969 "zoned": false, 00:29:18.969 "supported_io_types": { 00:29:18.969 "read": true, 00:29:18.969 "write": true, 00:29:18.969 "unmap": true, 00:29:18.969 "write_zeroes": true, 00:29:18.969 "flush": true, 00:29:18.969 "reset": true, 00:29:18.969 "compare": false, 00:29:18.969 "compare_and_write": false, 00:29:18.969 "abort": true, 00:29:18.969 "nvme_admin": false, 00:29:18.969 "nvme_io": false 00:29:18.969 }, 00:29:18.969 "memory_domains": [ 00:29:18.969 { 00:29:18.969 "dma_device_id": "system", 00:29:18.969 "dma_device_type": 1 00:29:18.969 }, 00:29:18.969 { 00:29:18.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:18.969 "dma_device_type": 2 00:29:18.969 } 00:29:18.969 ], 00:29:18.969 "driver_specific": { 00:29:18.969 "passthru": { 00:29:18.969 "name": "pt2", 00:29:18.969 "base_bdev_name": "malloc2" 00:29:18.969 } 00:29:18.969 } 00:29:18.969 }' 00:29:18.969 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:18.969 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == 
true ]] 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:19.229 19:13:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:29:19.488 [2024-06-10 19:13:34.219666] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' c45304ff-22fa-4d95-863a-ef53b27ab6f8 '!=' c45304ff-22fa-4d95-863a-ef53b27ab6f8 ']' 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:29:19.488 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:19.748 [2024-06-10 19:13:34.444111] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.748 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.007 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:20.007 "name": "raid_bdev1", 00:29:20.007 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:20.007 "strip_size_kb": 0, 00:29:20.007 "state": "online", 00:29:20.007 "raid_level": "raid1", 00:29:20.007 "superblock": true, 00:29:20.007 "num_base_bdevs": 2, 00:29:20.007 "num_base_bdevs_discovered": 1, 00:29:20.007 "num_base_bdevs_operational": 1, 00:29:20.007 "base_bdevs_list": [ 00:29:20.007 { 00:29:20.007 "name": null, 00:29:20.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.007 "is_configured": false, 00:29:20.007 "data_offset": 256, 00:29:20.007 "data_size": 7936 00:29:20.007 }, 00:29:20.007 { 
00:29:20.007 "name": "pt2", 00:29:20.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:20.007 "is_configured": true, 00:29:20.007 "data_offset": 256, 00:29:20.007 "data_size": 7936 00:29:20.007 } 00:29:20.007 ] 00:29:20.007 }' 00:29:20.007 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:20.007 19:13:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:20.575 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:20.834 [2024-06-10 19:13:35.474808] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:20.834 [2024-06-10 19:13:35.474832] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:20.834 [2024-06-10 19:13:35.474878] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:20.834 [2024-06-10 19:13:35.474914] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:20.834 [2024-06-10 19:13:35.474925] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12e1390 name raid_bdev1, state offline 00:29:20.834 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.834 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:29:21.094 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:29:21.094 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:29:21.094 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 
00:29:21.094 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:21.094 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:21.353 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:21.353 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:21.353 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:29:21.353 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:21.353 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:29:21.353 19:13:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:21.613 [2024-06-10 19:13:36.140532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:21.613 [2024-06-10 19:13:36.140582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:21.613 [2024-06-10 19:13:36.140601] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12e1be0 00:29:21.613 [2024-06-10 19:13:36.140618] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:21.613 [2024-06-10 19:13:36.141946] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:21.613 [2024-06-10 19:13:36.141970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:21.613 [2024-06-10 19:13:36.142011] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:29:21.613 [2024-06-10 19:13:36.142034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:21.613 [2024-06-10 19:13:36.142095] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12df2e0 00:29:21.613 [2024-06-10 19:13:36.142105] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:21.613 [2024-06-10 19:13:36.142158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12eab40 00:29:21.613 [2024-06-10 19:13:36.142224] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12df2e0 00:29:21.613 [2024-06-10 19:13:36.142232] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12df2e0 00:29:21.613 [2024-06-10 19:13:36.142283] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.613 pt2 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.613 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.902 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.903 "name": "raid_bdev1", 00:29:21.903 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:21.903 "strip_size_kb": 0, 00:29:21.903 "state": "online", 00:29:21.903 "raid_level": "raid1", 00:29:21.903 "superblock": true, 00:29:21.903 "num_base_bdevs": 2, 00:29:21.903 "num_base_bdevs_discovered": 1, 00:29:21.903 "num_base_bdevs_operational": 1, 00:29:21.903 "base_bdevs_list": [ 00:29:21.903 { 00:29:21.903 "name": null, 00:29:21.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.903 "is_configured": false, 00:29:21.903 "data_offset": 256, 00:29:21.903 "data_size": 7936 00:29:21.903 }, 00:29:21.903 { 00:29:21.903 "name": "pt2", 00:29:21.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:21.903 "is_configured": true, 00:29:21.903 "data_offset": 256, 00:29:21.903 "data_size": 7936 00:29:21.903 } 00:29:21.903 ] 00:29:21.903 }' 00:29:21.903 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.903 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:22.472 19:13:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:22.472 [2024-06-10 19:13:37.159207] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:22.472 [2024-06-10 19:13:37.159232] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:22.472 [2024-06-10 19:13:37.159282] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:22.472 [2024-06-10 19:13:37.159319] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:22.472 [2024-06-10 19:13:37.159329] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12df2e0 name raid_bdev1, state offline 00:29:22.472 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.472 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:29:22.731 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:29:22.731 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:29:22.731 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:29:22.731 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:22.990 [2024-06-10 19:13:37.620404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:22.990 [2024-06-10 19:13:37.620449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.990 [2024-06-10 19:13:37.620465] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12df560 00:29:22.990 [2024-06-10 19:13:37.620477] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.990 [2024-06-10 19:13:37.621806] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.990 [2024-06-10 19:13:37.621831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:22.990 [2024-06-10 19:13:37.621872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:22.990 [2024-06-10 19:13:37.621894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:22.990 [2024-06-10 19:13:37.621967] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:22.990 [2024-06-10 19:13:37.621979] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:22.990 [2024-06-10 19:13:37.621993] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1154980 name raid_bdev1, state configuring 00:29:22.990 [2024-06-10 19:13:37.622014] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:22.990 [2024-06-10 19:13:37.622061] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1157000 00:29:22.990 [2024-06-10 19:13:37.622071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:22.990 [2024-06-10 19:13:37.622121] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1154bf0 00:29:22.990 [2024-06-10 19:13:37.622184] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1157000 00:29:22.990 [2024-06-10 19:13:37.622192] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1157000 00:29:22.990 [2024-06-10 19:13:37.622245] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:22.990 pt1 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 
00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:22.990 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.991 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.250 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.250 "name": "raid_bdev1", 00:29:23.250 "uuid": "c45304ff-22fa-4d95-863a-ef53b27ab6f8", 00:29:23.250 "strip_size_kb": 0, 00:29:23.250 "state": "online", 00:29:23.250 "raid_level": "raid1", 00:29:23.250 "superblock": true, 00:29:23.250 "num_base_bdevs": 2, 00:29:23.250 
"num_base_bdevs_discovered": 1, 00:29:23.250 "num_base_bdevs_operational": 1, 00:29:23.250 "base_bdevs_list": [ 00:29:23.250 { 00:29:23.250 "name": null, 00:29:23.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.250 "is_configured": false, 00:29:23.250 "data_offset": 256, 00:29:23.250 "data_size": 7936 00:29:23.250 }, 00:29:23.250 { 00:29:23.250 "name": "pt2", 00:29:23.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:23.250 "is_configured": true, 00:29:23.250 "data_offset": 256, 00:29:23.250 "data_size": 7936 00:29:23.250 } 00:29:23.250 ] 00:29:23.250 }' 00:29:23.250 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.250 19:13:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:23.818 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:23.818 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:24.078 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:29:24.078 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:24.078 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:29:24.338 [2024-06-10 19:13:38.887914] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' c45304ff-22fa-4d95-863a-ef53b27ab6f8 '!=' c45304ff-22fa-4d95-863a-ef53b27ab6f8 ']' 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@562 -- # killprocess 1806894 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 1806894 ']' 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 1806894 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1806894 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1806894' 00:29:24.338 killing process with pid 1806894 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # kill 1806894 00:29:24.338 [2024-06-10 19:13:38.961842] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:24.338 19:13:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # wait 1806894 00:29:24.338 [2024-06-10 19:13:38.961894] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:24.338 [2024-06-10 19:13:38.961930] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:24.338 [2024-06-10 19:13:38.961941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1157000 name raid_bdev1, state offline 00:29:24.338 [2024-06-10 19:13:38.978207] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:29:24.597 19:13:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:29:24.597 00:29:24.597 real 0m14.720s 00:29:24.597 user 0m26.575s 00:29:24.597 sys 0m2.811s 00:29:24.597 19:13:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:24.597 19:13:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:24.597 ************************************ 00:29:24.597 END TEST raid_superblock_test_md_interleaved 00:29:24.597 ************************************ 00:29:24.597 19:13:39 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:29:24.597 19:13:39 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:24.597 19:13:39 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:24.597 19:13:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:24.597 ************************************ 00:29:24.597 START TEST raid_rebuild_test_sb_md_interleaved 00:29:24.597 ************************************ 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false false 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 
-- # (( i = 1 )) 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:24.597 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=1809619 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 1809619 /var/tmp/spdk-raid.sock 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 1809619 ']' 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:24.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:24.598 19:13:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:24.598 [2024-06-10 19:13:39.305871] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:29:24.598 [2024-06-10 19:13:39.305925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809619 ] 00:29:24.598 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:24.598 Zero copy mechanism will not be used. 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.0 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.1 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.2 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.3 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.4 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.5 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.6 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:01.7 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.0 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.1 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.2 cannot be used 00:29:24.858 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.3 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.4 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.5 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.6 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b6:02.7 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.0 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.1 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.2 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.3 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.4 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.5 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.6 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:01.7 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.0 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.1 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.2 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.3 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.4 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.5 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.6 cannot be used 00:29:24.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:24.858 EAL: Requested device 0000:b8:02.7 cannot be used 00:29:24.858 [2024-06-10 19:13:39.439904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.858 [2024-06-10 19:13:39.532390] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.858 [2024-06-10 19:13:39.586625] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:24.858 [2024-06-10 19:13:39.586650] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:25.796 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:25.796 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:29:25.796 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:25.796 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:29:25.796 BaseBdev1_malloc 
00:29:25.796 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:26.054 [2024-06-10 19:13:40.634387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:26.054 [2024-06-10 19:13:40.634431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.054 [2024-06-10 19:13:40.634450] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bba1a0 00:29:26.054 [2024-06-10 19:13:40.634461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.054 [2024-06-10 19:13:40.635826] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.054 [2024-06-10 19:13:40.635850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:26.054 BaseBdev1 00:29:26.054 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:26.054 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:29:26.312 BaseBdev2_malloc 00:29:26.312 19:13:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:26.570 [2024-06-10 19:13:41.084183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:26.570 [2024-06-10 19:13:41.084220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.570 [2024-06-10 19:13:41.084240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x1babff0 00:29:26.570 [2024-06-10 19:13:41.084256] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.570 [2024-06-10 19:13:41.085548] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.570 [2024-06-10 19:13:41.085572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:26.570 BaseBdev2 00:29:26.570 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:29:26.570 spare_malloc 00:29:26.829 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:26.829 spare_delay 00:29:26.829 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:27.088 [2024-06-10 19:13:41.694536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:27.088 [2024-06-10 19:13:41.694583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:27.088 [2024-06-10 19:13:41.694601] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b9f060 00:29:27.088 [2024-06-10 19:13:41.694612] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:27.088 [2024-06-10 19:13:41.695851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:27.088 [2024-06-10 19:13:41.695876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:27.088 spare 00:29:27.088 19:13:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:27.347 [2024-06-10 19:13:41.907118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:27.347 [2024-06-10 19:13:41.908250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:27.347 [2024-06-10 19:13:41.908412] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a163b0 00:29:27.347 [2024-06-10 19:13:41.908424] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:27.347 [2024-06-10 19:13:41.908486] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a15490 00:29:27.347 [2024-06-10 19:13:41.908559] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a163b0 00:29:27.347 [2024-06-10 19:13:41.908568] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a163b0 00:29:27.347 [2024-06-10 19:13:41.908626] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.347 19:13:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.605 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:27.605 "name": "raid_bdev1", 00:29:27.605 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:27.605 "strip_size_kb": 0, 00:29:27.605 "state": "online", 00:29:27.605 "raid_level": "raid1", 00:29:27.605 "superblock": true, 00:29:27.605 "num_base_bdevs": 2, 00:29:27.605 "num_base_bdevs_discovered": 2, 00:29:27.605 "num_base_bdevs_operational": 2, 00:29:27.605 "base_bdevs_list": [ 00:29:27.605 { 00:29:27.605 "name": "BaseBdev1", 00:29:27.605 "uuid": "12f62ce0-7974-5b5d-8b38-7e327c49bfed", 00:29:27.605 "is_configured": true, 00:29:27.605 "data_offset": 256, 00:29:27.605 "data_size": 7936 00:29:27.605 }, 00:29:27.605 { 00:29:27.605 "name": "BaseBdev2", 00:29:27.605 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:27.605 "is_configured": true, 00:29:27.605 "data_offset": 256, 00:29:27.605 "data_size": 7936 00:29:27.605 } 00:29:27.605 ] 00:29:27.605 }' 00:29:27.605 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:27.605 19:13:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:28.173 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:28.173 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:28.173 [2024-06-10 19:13:42.909930] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:28.173 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:29:28.432 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.432 19:13:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:28.432 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:29:28.432 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:28.432 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:29:28.432 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:28.690 [2024-06-10 19:13:43.350877] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.690 
19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.690 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.691 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.691 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.691 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.949 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.949 "name": "raid_bdev1", 00:29:28.949 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:28.949 "strip_size_kb": 0, 00:29:28.949 "state": "online", 00:29:28.949 "raid_level": "raid1", 00:29:28.949 "superblock": true, 00:29:28.949 "num_base_bdevs": 2, 00:29:28.949 "num_base_bdevs_discovered": 1, 00:29:28.949 "num_base_bdevs_operational": 1, 00:29:28.949 "base_bdevs_list": [ 00:29:28.949 { 00:29:28.949 "name": null, 00:29:28.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.949 "is_configured": false, 00:29:28.949 "data_offset": 256, 
00:29:28.949 "data_size": 7936 00:29:28.949 }, 00:29:28.949 { 00:29:28.949 "name": "BaseBdev2", 00:29:28.949 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:28.949 "is_configured": true, 00:29:28.949 "data_offset": 256, 00:29:28.949 "data_size": 7936 00:29:28.949 } 00:29:28.949 ] 00:29:28.949 }' 00:29:28.949 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.949 19:13:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:29.516 19:13:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:29.775 [2024-06-10 19:13:44.357590] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:29.775 [2024-06-10 19:13:44.361050] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a15be0 00:29:29.775 [2024-06-10 19:13:44.363093] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:29.775 19:13:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.711 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.970 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.970 "name": "raid_bdev1", 00:29:30.970 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:30.970 "strip_size_kb": 0, 00:29:30.970 "state": "online", 00:29:30.970 "raid_level": "raid1", 00:29:30.970 "superblock": true, 00:29:30.970 "num_base_bdevs": 2, 00:29:30.970 "num_base_bdevs_discovered": 2, 00:29:30.970 "num_base_bdevs_operational": 2, 00:29:30.970 "process": { 00:29:30.970 "type": "rebuild", 00:29:30.970 "target": "spare", 00:29:30.971 "progress": { 00:29:30.971 "blocks": 3072, 00:29:30.971 "percent": 38 00:29:30.971 } 00:29:30.971 }, 00:29:30.971 "base_bdevs_list": [ 00:29:30.971 { 00:29:30.971 "name": "spare", 00:29:30.971 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:30.971 "is_configured": true, 00:29:30.971 "data_offset": 256, 00:29:30.971 "data_size": 7936 00:29:30.971 }, 00:29:30.971 { 00:29:30.971 "name": "BaseBdev2", 00:29:30.971 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:30.971 "is_configured": true, 00:29:30.971 "data_offset": 256, 00:29:30.971 "data_size": 7936 00:29:30.971 } 00:29:30.971 ] 00:29:30.971 }' 00:29:30.971 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:30.971 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:30.971 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:30.971 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:30.971 19:13:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:31.230 [2024-06-10 19:13:45.916060] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:31.230 [2024-06-10 19:13:45.974783] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:31.230 [2024-06-10 19:13:45.974825] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:31.230 [2024-06-10 19:13:45.974839] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:31.230 [2024-06-10 19:13:45.974847] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:31.489 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:31.489 19:13:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:31.489 19:13:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:31.489 "name": "raid_bdev1", 00:29:31.489 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:31.489 "strip_size_kb": 0, 00:29:31.489 "state": "online", 00:29:31.489 "raid_level": "raid1", 00:29:31.489 "superblock": true, 00:29:31.489 "num_base_bdevs": 2, 00:29:31.489 "num_base_bdevs_discovered": 1, 00:29:31.489 "num_base_bdevs_operational": 1, 00:29:31.489 "base_bdevs_list": [ 00:29:31.489 { 00:29:31.489 "name": null, 00:29:31.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.489 "is_configured": false, 00:29:31.489 "data_offset": 256, 00:29:31.489 "data_size": 7936 00:29:31.489 }, 00:29:31.489 { 00:29:31.489 "name": "BaseBdev2", 00:29:31.489 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:31.489 "is_configured": true, 00:29:31.489 "data_offset": 256, 00:29:31.489 "data_size": 7936 00:29:31.489 } 00:29:31.489 ] 00:29:31.489 }' 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:31.489 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.057 19:13:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.317 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.317 "name": "raid_bdev1", 00:29:32.317 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:32.317 "strip_size_kb": 0, 00:29:32.317 "state": "online", 00:29:32.317 "raid_level": "raid1", 00:29:32.317 "superblock": true, 00:29:32.317 "num_base_bdevs": 2, 00:29:32.317 "num_base_bdevs_discovered": 1, 00:29:32.317 "num_base_bdevs_operational": 1, 00:29:32.317 "base_bdevs_list": [ 00:29:32.317 { 00:29:32.317 "name": null, 00:29:32.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.317 "is_configured": false, 00:29:32.317 "data_offset": 256, 00:29:32.317 "data_size": 7936 00:29:32.317 }, 00:29:32.317 { 00:29:32.317 "name": "BaseBdev2", 00:29:32.317 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:32.317 "is_configured": true, 00:29:32.317 "data_offset": 256, 00:29:32.317 "data_size": 7936 00:29:32.317 } 00:29:32.317 ] 00:29:32.317 }' 00:29:32.317 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.576 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:32.576 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:29:32.576 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:32.576 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:32.835 [2024-06-10 19:13:47.333834] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:32.835 [2024-06-10 19:13:47.337311] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a17fb0 00:29:32.835 [2024-06-10 19:13:47.338673] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:32.835 19:13:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.772 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.031 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.031 "name": "raid_bdev1", 
00:29:34.031 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:34.031 "strip_size_kb": 0, 00:29:34.031 "state": "online", 00:29:34.031 "raid_level": "raid1", 00:29:34.031 "superblock": true, 00:29:34.031 "num_base_bdevs": 2, 00:29:34.031 "num_base_bdevs_discovered": 2, 00:29:34.031 "num_base_bdevs_operational": 2, 00:29:34.031 "process": { 00:29:34.031 "type": "rebuild", 00:29:34.031 "target": "spare", 00:29:34.031 "progress": { 00:29:34.031 "blocks": 3072, 00:29:34.031 "percent": 38 00:29:34.031 } 00:29:34.031 }, 00:29:34.031 "base_bdevs_list": [ 00:29:34.031 { 00:29:34.031 "name": "spare", 00:29:34.031 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:34.031 "is_configured": true, 00:29:34.031 "data_offset": 256, 00:29:34.031 "data_size": 7936 00:29:34.031 }, 00:29:34.031 { 00:29:34.031 "name": "BaseBdev2", 00:29:34.031 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:34.031 "is_configured": true, 00:29:34.031 "data_offset": 256, 00:29:34.031 "data_size": 7936 00:29:34.031 } 00:29:34.031 ] 00:29:34.031 }' 00:29:34.031 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:34.031 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:34.032 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1053 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.032 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.291 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.291 "name": "raid_bdev1", 00:29:34.291 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:34.291 "strip_size_kb": 0, 00:29:34.291 "state": "online", 00:29:34.291 "raid_level": "raid1", 00:29:34.291 "superblock": true, 00:29:34.291 "num_base_bdevs": 2, 00:29:34.291 "num_base_bdevs_discovered": 2, 00:29:34.291 "num_base_bdevs_operational": 2, 
00:29:34.291 "process": { 00:29:34.291 "type": "rebuild", 00:29:34.291 "target": "spare", 00:29:34.291 "progress": { 00:29:34.291 "blocks": 3840, 00:29:34.291 "percent": 48 00:29:34.291 } 00:29:34.291 }, 00:29:34.291 "base_bdevs_list": [ 00:29:34.291 { 00:29:34.291 "name": "spare", 00:29:34.291 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:34.291 "is_configured": true, 00:29:34.291 "data_offset": 256, 00:29:34.291 "data_size": 7936 00:29:34.291 }, 00:29:34.291 { 00:29:34.291 "name": "BaseBdev2", 00:29:34.291 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:34.291 "is_configured": true, 00:29:34.291 "data_offset": 256, 00:29:34.291 "data_size": 7936 00:29:34.291 } 00:29:34.291 ] 00:29:34.291 }' 00:29:34.291 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:34.291 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:34.291 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:34.291 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:34.291 19:13:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:35.670 
19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.670 19:13:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.670 19:13:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:35.670 "name": "raid_bdev1", 00:29:35.670 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:35.670 "strip_size_kb": 0, 00:29:35.670 "state": "online", 00:29:35.670 "raid_level": "raid1", 00:29:35.670 "superblock": true, 00:29:35.670 "num_base_bdevs": 2, 00:29:35.670 "num_base_bdevs_discovered": 2, 00:29:35.670 "num_base_bdevs_operational": 2, 00:29:35.670 "process": { 00:29:35.670 "type": "rebuild", 00:29:35.670 "target": "spare", 00:29:35.670 "progress": { 00:29:35.670 "blocks": 7168, 00:29:35.670 "percent": 90 00:29:35.670 } 00:29:35.670 }, 00:29:35.670 "base_bdevs_list": [ 00:29:35.670 { 00:29:35.670 "name": "spare", 00:29:35.670 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:35.670 "is_configured": true, 00:29:35.670 "data_offset": 256, 00:29:35.670 "data_size": 7936 00:29:35.670 }, 00:29:35.670 { 00:29:35.670 "name": "BaseBdev2", 00:29:35.670 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:35.670 "is_configured": true, 00:29:35.670 "data_offset": 256, 00:29:35.670 "data_size": 7936 00:29:35.670 } 00:29:35.670 ] 00:29:35.670 }' 00:29:35.670 19:13:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:35.670 19:13:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:35.671 19:13:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:29:35.671 19:13:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.671 19:13:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:35.929 [2024-06-10 19:13:50.461143] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:35.929 [2024-06-10 19:13:50.461197] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:35.929 [2024-06-10 19:13:50.461271] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.867 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.867 "name": "raid_bdev1", 00:29:36.867 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:36.867 
"strip_size_kb": 0, 00:29:36.867 "state": "online", 00:29:36.867 "raid_level": "raid1", 00:29:36.867 "superblock": true, 00:29:36.867 "num_base_bdevs": 2, 00:29:36.867 "num_base_bdevs_discovered": 2, 00:29:36.867 "num_base_bdevs_operational": 2, 00:29:36.867 "base_bdevs_list": [ 00:29:36.867 { 00:29:36.867 "name": "spare", 00:29:36.867 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:36.867 "is_configured": true, 00:29:36.867 "data_offset": 256, 00:29:36.867 "data_size": 7936 00:29:36.867 }, 00:29:36.867 { 00:29:36.867 "name": "BaseBdev2", 00:29:36.867 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:36.867 "is_configured": true, 00:29:36.867 "data_offset": 256, 00:29:36.868 "data_size": 7936 00:29:36.868 } 00:29:36.868 ] 00:29:36.868 }' 00:29:36.868 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.868 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:36.868 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.127 "name": "raid_bdev1", 00:29:37.127 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:37.127 "strip_size_kb": 0, 00:29:37.127 "state": "online", 00:29:37.127 "raid_level": "raid1", 00:29:37.127 "superblock": true, 00:29:37.127 "num_base_bdevs": 2, 00:29:37.127 "num_base_bdevs_discovered": 2, 00:29:37.127 "num_base_bdevs_operational": 2, 00:29:37.127 "base_bdevs_list": [ 00:29:37.127 { 00:29:37.127 "name": "spare", 00:29:37.127 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:37.127 "is_configured": true, 00:29:37.127 "data_offset": 256, 00:29:37.127 "data_size": 7936 00:29:37.127 }, 00:29:37.127 { 00:29:37.127 "name": "BaseBdev2", 00:29:37.127 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:37.127 "is_configured": true, 00:29:37.127 "data_offset": 256, 00:29:37.127 "data_size": 7936 00:29:37.127 } 00:29:37.127 ] 00:29:37.127 }' 00:29:37.127 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 2 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.387 19:13:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.646 19:13:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.646 "name": "raid_bdev1", 00:29:37.646 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:37.646 "strip_size_kb": 0, 00:29:37.646 "state": "online", 00:29:37.646 "raid_level": "raid1", 00:29:37.646 "superblock": true, 00:29:37.646 "num_base_bdevs": 2, 00:29:37.646 "num_base_bdevs_discovered": 2, 00:29:37.646 "num_base_bdevs_operational": 2, 00:29:37.646 "base_bdevs_list": [ 00:29:37.646 { 00:29:37.646 
"name": "spare", 00:29:37.646 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:37.646 "is_configured": true, 00:29:37.646 "data_offset": 256, 00:29:37.646 "data_size": 7936 00:29:37.646 }, 00:29:37.646 { 00:29:37.646 "name": "BaseBdev2", 00:29:37.646 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:37.646 "is_configured": true, 00:29:37.646 "data_offset": 256, 00:29:37.646 "data_size": 7936 00:29:37.646 } 00:29:37.646 ] 00:29:37.646 }' 00:29:37.646 19:13:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.646 19:13:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:38.215 19:13:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:38.215 [2024-06-10 19:13:52.968561] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:38.215 [2024-06-10 19:13:52.968590] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:38.215 [2024-06-10 19:13:52.968639] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:38.215 [2024-06-10 19:13:52.968689] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:38.215 [2024-06-10 19:13:52.968700] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a163b0 name raid_bdev1, state offline 00:29:38.474 19:13:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.474 19:13:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:29:38.474 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 
00:29:38.474 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:29:38.474 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:38.474 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:38.733 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:38.994 [2024-06-10 19:13:53.646440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:38.994 [2024-06-10 19:13:53.646479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.994 [2024-06-10 19:13:53.646496] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a18280 00:29:38.994 [2024-06-10 19:13:53.646507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.994 [2024-06-10 19:13:53.648168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.994 [2024-06-10 19:13:53.648195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:38.994 [2024-06-10 19:13:53.648248] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:38.994 [2024-06-10 19:13:53.648272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.994 [2024-06-10 19:13:53.648348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:38.994 spare 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:38.994 19:13:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:38.994 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:38.995 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.995 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.995 [2024-06-10 19:13:53.748651] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a14c90 00:29:38.995 [2024-06-10 19:13:53.748667] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:38.995 [2024-06-10 19:13:53.748734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a17d70 00:29:38.995 [2024-06-10 19:13:53.748816] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a14c90 00:29:38.995 [2024-06-10 19:13:53.748826] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a14c90 00:29:38.995 [2024-06-10 19:13:53.748886] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.254 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:39.254 "name": "raid_bdev1", 00:29:39.254 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:39.254 "strip_size_kb": 0, 00:29:39.254 "state": "online", 00:29:39.254 "raid_level": "raid1", 00:29:39.254 "superblock": true, 00:29:39.254 "num_base_bdevs": 2, 00:29:39.254 "num_base_bdevs_discovered": 2, 00:29:39.254 "num_base_bdevs_operational": 2, 00:29:39.254 "base_bdevs_list": [ 00:29:39.254 { 00:29:39.254 "name": "spare", 00:29:39.254 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:39.254 "is_configured": true, 00:29:39.254 "data_offset": 256, 00:29:39.254 "data_size": 7936 00:29:39.254 }, 00:29:39.254 { 00:29:39.254 "name": "BaseBdev2", 00:29:39.254 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:39.254 "is_configured": true, 00:29:39.254 "data_offset": 256, 00:29:39.254 "data_size": 7936 00:29:39.254 } 00:29:39.254 ] 00:29:39.254 }' 00:29:39.254 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:39.254 19:13:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:39.822 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:39.822 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.822 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:39.822 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:39.822 19:13:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.822 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.822 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.083 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.083 "name": "raid_bdev1", 00:29:40.083 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:40.083 "strip_size_kb": 0, 00:29:40.083 "state": "online", 00:29:40.083 "raid_level": "raid1", 00:29:40.083 "superblock": true, 00:29:40.083 "num_base_bdevs": 2, 00:29:40.083 "num_base_bdevs_discovered": 2, 00:29:40.083 "num_base_bdevs_operational": 2, 00:29:40.083 "base_bdevs_list": [ 00:29:40.083 { 00:29:40.083 "name": "spare", 00:29:40.083 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:40.083 "is_configured": true, 00:29:40.083 "data_offset": 256, 00:29:40.083 "data_size": 7936 00:29:40.083 }, 00:29:40.083 { 00:29:40.083 "name": "BaseBdev2", 00:29:40.083 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:40.083 "is_configured": true, 00:29:40.083 "data_offset": 256, 00:29:40.083 "data_size": 7936 00:29:40.083 } 00:29:40.083 ] 00:29:40.083 }' 00:29:40.083 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.083 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:40.083 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.083 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:40.083 19:13:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.083 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:40.342 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.342 19:13:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:40.603 [2024-06-10 19:13:55.210645] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.603 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.862 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.862 "name": "raid_bdev1", 00:29:40.862 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:40.862 "strip_size_kb": 0, 00:29:40.862 "state": "online", 00:29:40.862 "raid_level": "raid1", 00:29:40.862 "superblock": true, 00:29:40.862 "num_base_bdevs": 2, 00:29:40.862 "num_base_bdevs_discovered": 1, 00:29:40.862 "num_base_bdevs_operational": 1, 00:29:40.862 "base_bdevs_list": [ 00:29:40.862 { 00:29:40.862 "name": null, 00:29:40.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.862 "is_configured": false, 00:29:40.862 "data_offset": 256, 00:29:40.862 "data_size": 7936 00:29:40.862 }, 00:29:40.862 { 00:29:40.862 "name": "BaseBdev2", 00:29:40.862 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:40.862 "is_configured": true, 00:29:40.862 "data_offset": 256, 00:29:40.862 "data_size": 7936 00:29:40.862 } 00:29:40.862 ] 00:29:40.862 }' 00:29:40.862 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.862 19:13:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:41.432 19:13:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:41.692 [2024-06-10 19:13:56.217305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:41.692 [2024-06-10 19:13:56.217434] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:41.692 [2024-06-10 19:13:56.217450] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:41.692 [2024-06-10 19:13:56.217476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:41.692 [2024-06-10 19:13:56.220805] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a1daf0 00:29:41.692 [2024-06-10 19:13:56.222868] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:41.692 19:13:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.630 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.889 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.889 "name": "raid_bdev1", 00:29:42.889 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:42.889 "strip_size_kb": 
0, 00:29:42.889 "state": "online", 00:29:42.889 "raid_level": "raid1", 00:29:42.889 "superblock": true, 00:29:42.889 "num_base_bdevs": 2, 00:29:42.889 "num_base_bdevs_discovered": 2, 00:29:42.889 "num_base_bdevs_operational": 2, 00:29:42.889 "process": { 00:29:42.889 "type": "rebuild", 00:29:42.889 "target": "spare", 00:29:42.889 "progress": { 00:29:42.889 "blocks": 3072, 00:29:42.889 "percent": 38 00:29:42.889 } 00:29:42.889 }, 00:29:42.889 "base_bdevs_list": [ 00:29:42.889 { 00:29:42.889 "name": "spare", 00:29:42.889 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:42.889 "is_configured": true, 00:29:42.889 "data_offset": 256, 00:29:42.889 "data_size": 7936 00:29:42.889 }, 00:29:42.889 { 00:29:42.889 "name": "BaseBdev2", 00:29:42.889 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:42.889 "is_configured": true, 00:29:42.889 "data_offset": 256, 00:29:42.889 "data_size": 7936 00:29:42.889 } 00:29:42.889 ] 00:29:42.889 }' 00:29:42.889 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.889 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.889 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.889 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.889 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:43.219 [2024-06-10 19:13:57.771834] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:43.219 [2024-06-10 19:13:57.834523] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:43.219 [2024-06-10 19:13:57.834564] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:43.219 [2024-06-10 19:13:57.834585] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:43.219 [2024-06-10 19:13:57.834593] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.219 19:13:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.486 19:13:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:43.486 "name": "raid_bdev1", 00:29:43.486 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:43.486 "strip_size_kb": 0, 00:29:43.486 "state": "online", 00:29:43.486 "raid_level": "raid1", 00:29:43.486 "superblock": true, 00:29:43.486 "num_base_bdevs": 2, 00:29:43.486 "num_base_bdevs_discovered": 1, 00:29:43.486 "num_base_bdevs_operational": 1, 00:29:43.486 "base_bdevs_list": [ 00:29:43.486 { 00:29:43.486 "name": null, 00:29:43.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.486 "is_configured": false, 00:29:43.487 "data_offset": 256, 00:29:43.487 "data_size": 7936 00:29:43.487 }, 00:29:43.487 { 00:29:43.487 "name": "BaseBdev2", 00:29:43.487 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:43.487 "is_configured": true, 00:29:43.487 "data_offset": 256, 00:29:43.487 "data_size": 7936 00:29:43.487 } 00:29:43.487 ] 00:29:43.487 }' 00:29:43.487 19:13:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:43.487 19:13:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:44.055 19:13:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:44.314 [2024-06-10 19:13:58.864882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:44.314 [2024-06-10 19:13:58.864929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:44.314 [2024-06-10 19:13:58.864947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a15010 00:29:44.314 [2024-06-10 19:13:58.864959] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:44.314 [2024-06-10 19:13:58.865123] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:44.314 [2024-06-10 
19:13:58.865138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:44.314 [2024-06-10 19:13:58.865187] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:44.314 [2024-06-10 19:13:58.865197] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:44.314 [2024-06-10 19:13:58.865207] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:44.314 [2024-06-10 19:13:58.865223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:44.314 [2024-06-10 19:13:58.868538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a190f0 00:29:44.314 [2024-06-10 19:13:58.869911] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:44.314 spare 00:29:44.314 19:13:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:45.252 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.252 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.252 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.252 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.253 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.253 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.253 19:13:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.512 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.512 "name": "raid_bdev1", 00:29:45.512 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:45.512 "strip_size_kb": 0, 00:29:45.512 "state": "online", 00:29:45.512 "raid_level": "raid1", 00:29:45.512 "superblock": true, 00:29:45.512 "num_base_bdevs": 2, 00:29:45.512 "num_base_bdevs_discovered": 2, 00:29:45.512 "num_base_bdevs_operational": 2, 00:29:45.512 "process": { 00:29:45.512 "type": "rebuild", 00:29:45.512 "target": "spare", 00:29:45.512 "progress": { 00:29:45.512 "blocks": 3072, 00:29:45.512 "percent": 38 00:29:45.512 } 00:29:45.512 }, 00:29:45.512 "base_bdevs_list": [ 00:29:45.512 { 00:29:45.512 "name": "spare", 00:29:45.512 "uuid": "c01372b1-cca6-58b4-966d-6db45e98c16b", 00:29:45.512 "is_configured": true, 00:29:45.512 "data_offset": 256, 00:29:45.512 "data_size": 7936 00:29:45.512 }, 00:29:45.512 { 00:29:45.512 "name": "BaseBdev2", 00:29:45.512 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:45.512 "is_configured": true, 00:29:45.512 "data_offset": 256, 00:29:45.512 "data_size": 7936 00:29:45.512 } 00:29:45.512 ] 00:29:45.512 }' 00:29:45.512 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.512 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:45.512 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.512 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:45.512 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:45.770 [2024-06-10 
19:14:00.423315] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:45.770 [2024-06-10 19:14:00.481588] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:45.770 [2024-06-10 19:14:00.481627] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.770 [2024-06-10 19:14:00.481641] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:45.770 [2024-06-10 19:14:00.481648] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.770 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.029 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:46.029 "name": "raid_bdev1", 00:29:46.029 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:46.029 "strip_size_kb": 0, 00:29:46.029 "state": "online", 00:29:46.029 "raid_level": "raid1", 00:29:46.029 "superblock": true, 00:29:46.029 "num_base_bdevs": 2, 00:29:46.029 "num_base_bdevs_discovered": 1, 00:29:46.029 "num_base_bdevs_operational": 1, 00:29:46.029 "base_bdevs_list": [ 00:29:46.029 { 00:29:46.029 "name": null, 00:29:46.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.029 "is_configured": false, 00:29:46.029 "data_offset": 256, 00:29:46.029 "data_size": 7936 00:29:46.029 }, 00:29:46.029 { 00:29:46.029 "name": "BaseBdev2", 00:29:46.029 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:46.029 "is_configured": true, 00:29:46.029 "data_offset": 256, 00:29:46.029 "data_size": 7936 00:29:46.029 } 00:29:46.029 ] 00:29:46.029 }' 00:29:46.029 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:46.029 19:14:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:46.598 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:46.598 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.598 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:46.598 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:46.598 19:14:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.598 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.598 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.857 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.857 "name": "raid_bdev1", 00:29:46.857 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:46.857 "strip_size_kb": 0, 00:29:46.857 "state": "online", 00:29:46.857 "raid_level": "raid1", 00:29:46.857 "superblock": true, 00:29:46.857 "num_base_bdevs": 2, 00:29:46.857 "num_base_bdevs_discovered": 1, 00:29:46.857 "num_base_bdevs_operational": 1, 00:29:46.857 "base_bdevs_list": [ 00:29:46.857 { 00:29:46.857 "name": null, 00:29:46.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.857 "is_configured": false, 00:29:46.857 "data_offset": 256, 00:29:46.857 "data_size": 7936 00:29:46.857 }, 00:29:46.857 { 00:29:46.857 "name": "BaseBdev2", 00:29:46.857 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:46.857 "is_configured": true, 00:29:46.857 "data_offset": 256, 00:29:46.857 "data_size": 7936 00:29:46.857 } 00:29:46.857 ] 00:29:46.857 }' 00:29:46.857 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.857 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:46.857 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.116 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:47.116 19:14:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:47.116 19:14:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:47.376 [2024-06-10 19:14:02.073335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:47.376 [2024-06-10 19:14:02.073374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:47.376 [2024-06-10 19:14:02.073394] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba2470 00:29:47.376 [2024-06-10 19:14:02.073405] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:47.376 [2024-06-10 19:14:02.073549] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:47.376 [2024-06-10 19:14:02.073563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:47.376 [2024-06-10 19:14:02.073610] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:47.376 [2024-06-10 19:14:02.073621] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:47.376 [2024-06-10 19:14:02.073631] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:47.376 BaseBdev1 00:29:47.376 19:14:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.755 "name": "raid_bdev1", 00:29:48.755 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:48.755 "strip_size_kb": 0, 00:29:48.755 "state": "online", 00:29:48.755 "raid_level": "raid1", 00:29:48.755 "superblock": true, 00:29:48.755 "num_base_bdevs": 2, 00:29:48.755 "num_base_bdevs_discovered": 1, 00:29:48.755 "num_base_bdevs_operational": 1, 00:29:48.755 "base_bdevs_list": [ 00:29:48.755 { 00:29:48.755 "name": null, 00:29:48.755 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:48.755 "is_configured": false, 00:29:48.755 "data_offset": 256, 00:29:48.755 "data_size": 7936 00:29:48.755 }, 00:29:48.755 { 00:29:48.755 "name": "BaseBdev2", 00:29:48.755 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:48.755 "is_configured": true, 00:29:48.755 "data_offset": 256, 00:29:48.755 "data_size": 7936 00:29:48.755 } 00:29:48.755 ] 00:29:48.755 }' 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.755 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.324 19:14:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.584 "name": "raid_bdev1", 00:29:49.584 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:49.584 "strip_size_kb": 0, 00:29:49.584 "state": "online", 00:29:49.584 "raid_level": "raid1", 00:29:49.584 "superblock": true, 00:29:49.584 "num_base_bdevs": 2, 00:29:49.584 
"num_base_bdevs_discovered": 1, 00:29:49.584 "num_base_bdevs_operational": 1, 00:29:49.584 "base_bdevs_list": [ 00:29:49.584 { 00:29:49.584 "name": null, 00:29:49.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.584 "is_configured": false, 00:29:49.584 "data_offset": 256, 00:29:49.584 "data_size": 7936 00:29:49.584 }, 00:29:49.584 { 00:29:49.584 "name": "BaseBdev2", 00:29:49.584 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:49.584 "is_configured": true, 00:29:49.584 "data_offset": 256, 00:29:49.584 "data_size": 7936 00:29:49.584 } 00:29:49.584 ] 00:29:49.584 }' 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # local es=0 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type 
-t "$arg")" in 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:49.584 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:49.844 [2024-06-10 19:14:04.427617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:49.844 [2024-06-10 19:14:04.427722] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:49.844 [2024-06-10 19:14:04.427736] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:49.844 request: 00:29:49.844 { 00:29:49.844 "raid_bdev": "raid_bdev1", 00:29:49.844 "base_bdev": "BaseBdev1", 00:29:49.844 "method": "bdev_raid_add_base_bdev", 00:29:49.844 "req_id": 1 00:29:49.844 } 00:29:49.844 Got JSON-RPC error response 00:29:49.844 response: 00:29:49.844 { 00:29:49.844 "code": -22, 
00:29:49.844 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:49.844 } 00:29:49.844 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # es=1 00:29:49.844 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:49.844 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:49.844 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:49.844 19:14:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.782 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.042 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:51.042 "name": "raid_bdev1", 00:29:51.042 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:51.042 "strip_size_kb": 0, 00:29:51.042 "state": "online", 00:29:51.042 "raid_level": "raid1", 00:29:51.042 "superblock": true, 00:29:51.042 "num_base_bdevs": 2, 00:29:51.042 "num_base_bdevs_discovered": 1, 00:29:51.042 "num_base_bdevs_operational": 1, 00:29:51.042 "base_bdevs_list": [ 00:29:51.042 { 00:29:51.042 "name": null, 00:29:51.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.042 "is_configured": false, 00:29:51.042 "data_offset": 256, 00:29:51.042 "data_size": 7936 00:29:51.042 }, 00:29:51.042 { 00:29:51.042 "name": "BaseBdev2", 00:29:51.042 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:51.042 "is_configured": true, 00:29:51.042 "data_offset": 256, 00:29:51.042 "data_size": 7936 00:29:51.042 } 00:29:51.042 ] 00:29:51.042 }' 00:29:51.042 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:51.042 19:14:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local 
target=none 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.610 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.869 "name": "raid_bdev1", 00:29:51.869 "uuid": "45313a25-2637-472f-ad71-b3177abbc422", 00:29:51.869 "strip_size_kb": 0, 00:29:51.869 "state": "online", 00:29:51.869 "raid_level": "raid1", 00:29:51.869 "superblock": true, 00:29:51.869 "num_base_bdevs": 2, 00:29:51.869 "num_base_bdevs_discovered": 1, 00:29:51.869 "num_base_bdevs_operational": 1, 00:29:51.869 "base_bdevs_list": [ 00:29:51.869 { 00:29:51.869 "name": null, 00:29:51.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.869 "is_configured": false, 00:29:51.869 "data_offset": 256, 00:29:51.869 "data_size": 7936 00:29:51.869 }, 00:29:51.869 { 00:29:51.869 "name": "BaseBdev2", 00:29:51.869 "uuid": "2c093e89-4b0e-5417-a697-b5fdcd3863bd", 00:29:51.869 "is_configured": true, 00:29:51.869 "data_offset": 256, 00:29:51.869 "data_size": 7936 00:29:51.869 } 00:29:51.869 ] 00:29:51.869 }' 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:51.869 19:14:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 1809619 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 1809619 ']' 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 1809619 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:51.869 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1809619 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1809619' 00:29:52.129 killing process with pid 1809619 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # kill 1809619 00:29:52.129 Received shutdown signal, test time was about 60.000000 seconds 00:29:52.129 00:29:52.129 Latency(us) 00:29:52.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.129 =================================================================================================================== 00:29:52.129 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:52.129 [2024-06-10 19:14:06.627514] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:52.129 [2024-06-10 19:14:06.627598] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:52.129 [2024-06-10 19:14:06.627635] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:52.129 [2024-06-10 19:14:06.627646] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a14c90 name raid_bdev1, state offline 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # wait 1809619 00:29:52.129 [2024-06-10 19:14:06.651699] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:29:52.129 00:29:52.129 real 0m27.594s 00:29:52.129 user 0m43.564s 00:29:52.129 sys 0m3.652s 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:52.129 19:14:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:52.129 ************************************ 00:29:52.129 END TEST raid_rebuild_test_sb_md_interleaved 00:29:52.129 ************************************ 00:29:52.389 19:14:06 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:29:52.389 19:14:06 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:29:52.389 19:14:06 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 1809619 ']' 00:29:52.389 19:14:06 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 1809619 00:29:52.389 19:14:06 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:29:52.389 00:29:52.389 real 17m21.200s 00:29:52.389 user 29m12.741s 00:29:52.389 sys 3m11.659s 00:29:52.389 19:14:06 bdev_raid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:52.389 19:14:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:52.389 ************************************ 00:29:52.389 END TEST bdev_raid 00:29:52.389 ************************************ 00:29:52.389 19:14:06 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 
00:29:52.389 19:14:06 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:52.389 19:14:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:52.389 19:14:06 -- common/autotest_common.sh@10 -- # set +x 00:29:52.389 ************************************ 00:29:52.389 START TEST bdevperf_config 00:29:52.389 ************************************ 00:29:52.389 19:14:07 bdevperf_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:29:52.389 * Looking for test storage... 00:29:52.389 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:29:52.389 19:14:07 bdevperf_config -- 
bdevperf/common.sh@18 -- # job='[global]' 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:52.389 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:29:52.389 19:14:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:52.648 19:14:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:52.648 19:14:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:52.648 19:14:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:52.648 19:14:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:52.649 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:52.649 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:52.649 
19:14:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:52.649 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:52.649 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:52.649 19:14:07 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:55.186 19:14:09 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-10 19:14:07.228381] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:29:55.186 [2024-06-10 19:14:07.228444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814630 ] 00:29:55.186 Using job config with 4 jobs 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.0 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.1 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.2 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.3 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.4 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.5 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.6 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:01.7 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.0 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.1 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.2 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.3 
cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.4 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.5 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.6 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b6:02.7 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.0 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.1 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.2 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.3 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.4 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.5 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.6 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:01.7 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.0 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.1 cannot be used 
00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.2 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.3 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.4 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.5 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.6 cannot be used 00:29:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.186 EAL: Requested device 0000:b8:02.7 cannot be used 00:29:55.186 [2024-06-10 19:14:07.374437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.186 [2024-06-10 19:14:07.487640] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.186 cpumask for '\''job0'\'' is too big 00:29:55.186 cpumask for '\''job1'\'' is too big 00:29:55.186 cpumask for '\''job2'\'' is too big 00:29:55.186 cpumask for '\''job3'\'' is too big 00:29:55.186 Running I/O for 2 seconds... 
00:29:55.186 00:29:55.186 Latency(us) 00:29:55.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.186 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.186 Malloc0 : 2.02 26514.58 25.89 0.00 0.00 9646.83 1690.83 14889.78 00:29:55.186 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.186 Malloc0 : 2.02 26492.39 25.87 0.00 0.00 9634.65 1671.17 13159.63 00:29:55.186 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.186 Malloc0 : 2.02 26470.32 25.85 0.00 0.00 9623.86 1730.15 11429.48 00:29:55.186 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.186 Malloc0 : 2.02 26448.28 25.83 0.00 0.00 9611.93 1671.17 10013.90 00:29:55.186 =================================================================================================================== 00:29:55.186 Total : 105925.56 103.44 0.00 0.00 9629.32 1671.17 14889.78' 00:29:55.186 19:14:09 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-10 19:14:07.228381] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:29:55.187 [2024-06-10 19:14:07.228444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814630 ] 00:29:55.187 Using job config with 4 jobs 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.1 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.3 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.6 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.7 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.1 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.3 
cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.6 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.7 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.1 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.3 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.6 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.7 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.1 cannot be used 
00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.3 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.6 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:02.7 cannot be used 00:29:55.187 [2024-06-10 19:14:07.374437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.187 [2024-06-10 19:14:07.487640] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.187 cpumask for '\''job0'\'' is too big 00:29:55.187 cpumask for '\''job1'\'' is too big 00:29:55.187 cpumask for '\''job2'\'' is too big 00:29:55.187 cpumask for '\''job3'\'' is too big 00:29:55.187 Running I/O for 2 seconds... 
00:29:55.187 00:29:55.187 Latency(us) 00:29:55.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.187 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.187 Malloc0 : 2.02 26514.58 25.89 0.00 0.00 9646.83 1690.83 14889.78 00:29:55.187 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.187 Malloc0 : 2.02 26492.39 25.87 0.00 0.00 9634.65 1671.17 13159.63 00:29:55.187 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.187 Malloc0 : 2.02 26470.32 25.85 0.00 0.00 9623.86 1730.15 11429.48 00:29:55.187 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.187 Malloc0 : 2.02 26448.28 25.83 0.00 0.00 9611.93 1671.17 10013.90 00:29:55.187 =================================================================================================================== 00:29:55.187 Total : 105925.56 103.44 0.00 0.00 9629.32 1671.17 14889.78' 00:29:55.187 19:14:09 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:55.187 19:14:09 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 19:14:07.228381] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:29:55.187 [2024-06-10 19:14:07.228444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814630 ] 00:29:55.187 Using job config with 4 jobs 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.1 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.3 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.6 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:01.7 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.1 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.3 
cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.6 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b6:02.7 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.0 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.1 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.2 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.3 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.4 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.187 EAL: Requested device 0000:b8:01.5 cannot be used 00:29:55.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:01.6 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:01.7 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.0 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.1 cannot be used 
00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.2 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.3 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.4 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.5 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.6 cannot be used 00:29:55.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.188 EAL: Requested device 0000:b8:02.7 cannot be used 00:29:55.188 [2024-06-10 19:14:07.374437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.188 [2024-06-10 19:14:07.487640] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.188 cpumask for '\''job0'\'' is too big 00:29:55.188 cpumask for '\''job1'\'' is too big 00:29:55.188 cpumask for '\''job2'\'' is too big 00:29:55.188 cpumask for '\''job3'\'' is too big 00:29:55.188 Running I/O for 2 seconds... 
00:29:55.188 00:29:55.188 Latency(us) 00:29:55.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.188 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.188 Malloc0 : 2.02 26514.58 25.89 0.00 0.00 9646.83 1690.83 14889.78 00:29:55.188 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.188 Malloc0 : 2.02 26492.39 25.87 0.00 0.00 9634.65 1671.17 13159.63 00:29:55.188 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.188 Malloc0 : 2.02 26470.32 25.85 0.00 0.00 9623.86 1730.15 11429.48 00:29:55.188 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:55.188 Malloc0 : 2.02 26448.28 25.83 0.00 0.00 9611.93 1671.17 10013.90 00:29:55.188 =================================================================================================================== 00:29:55.188 Total : 105925.56 103.44 0.00 0.00 9629.32 1671.17 14889.78' 00:29:55.188 19:14:09 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:55.188 19:14:09 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:29:55.188 19:14:09 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:55.188 [2024-06-10 19:14:09.917871] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:29:55.188 [2024-06-10 19:14:09.917920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815157 ] 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.0 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.1 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.2 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.3 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.4 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.5 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.6 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:01.7 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.0 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.1 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.2 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.3 cannot be used 00:29:55.448 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.4 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.5 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.6 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b6:02.7 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.0 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.1 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.2 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.3 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.4 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.5 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.6 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:01.7 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.0 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.1 cannot be used 00:29:55.448 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.2 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.3 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.4 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.5 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.6 cannot be used 00:29:55.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:55.448 EAL: Requested device 0000:b8:02.7 cannot be used 00:29:55.448 [2024-06-10 19:14:10.051791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.448 [2024-06-10 19:14:10.152457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.707 cpumask for 'job0' is too big 00:29:55.707 cpumask for 'job1' is too big 00:29:55.707 cpumask for 'job2' is too big 00:29:55.707 cpumask for 'job3' is too big 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:29:58.244 Running I/O for 2 seconds... 
00:29:58.244 00:29:58.244 Latency(us) 00:29:58.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.244 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:58.244 Malloc0 : 2.02 26620.41 26.00 0.00 0.00 9612.83 1717.04 14784.92 00:29:58.244 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:58.244 Malloc0 : 2.02 26598.19 25.97 0.00 0.00 9601.05 1703.94 13107.20 00:29:58.244 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:58.244 Malloc0 : 2.02 26576.00 25.95 0.00 0.00 9589.07 1677.72 11429.48 00:29:58.244 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:58.244 Malloc0 : 2.02 26553.89 25.93 0.00 0.00 9577.99 1671.17 9961.47 00:29:58.244 =================================================================================================================== 00:29:58.244 Total : 106348.49 103.86 0.00 0.00 9595.24 1671.17 14784.92' 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:58.244 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job 
job1 write Malloc0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:58.244 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:58.244 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:58.244 19:14:12 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:00.781 19:14:15 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-10 19:14:12.616453] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:00.781 [2024-06-10 19:14:12.616517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815657 ] 00:30:00.781 Using job config with 3 jobs 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.3 
cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.781 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:00.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.1 cannot be used 
00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:00.782 [2024-06-10 19:14:12.762977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.782 [2024-06-10 19:14:12.861827] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.782 cpumask for '\''job0'\'' is too big 00:30:00.782 cpumask for '\''job1'\'' is too big 00:30:00.782 cpumask for '\''job2'\'' is too big 00:30:00.782 Running I/O for 2 seconds... 
00:30:00.782 00:30:00.782 Latency(us) 00:30:00.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.782 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.782 Malloc0 : 2.01 36048.50 35.20 0.00 0.00 7100.18 1638.40 10643.05 00:30:00.782 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.782 Malloc0 : 2.01 36018.23 35.17 0.00 0.00 7091.87 1625.29 8965.32 00:30:00.782 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.782 Malloc0 : 2.02 36072.31 35.23 0.00 0.00 7066.56 838.86 7444.89 00:30:00.782 =================================================================================================================== 00:30:00.782 Total : 108139.04 105.60 0.00 0.00 7086.18 838.86 10643.05' 00:30:00.782 19:14:15 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-10 19:14:12.616453] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:00.782 [2024-06-10 19:14:12.616517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815657 ] 00:30:00.782 Using job config with 3 jobs 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.3 
cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.1 cannot be used 
00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:00.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:00.782 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:00.782 [2024-06-10 19:14:12.762977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.782 [2024-06-10 19:14:12.861827] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.782 cpumask for '\''job0'\'' is too big 00:30:00.782 cpumask for '\''job1'\'' is too big 00:30:00.782 cpumask for '\''job2'\'' is too big 00:30:00.782 Running I/O for 2 seconds... 
00:30:00.782 00:30:00.782 Latency(us) 00:30:00.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.782 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.782 Malloc0 : 2.01 36048.50 35.20 0.00 0.00 7100.18 1638.40 10643.05 00:30:00.782 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.782 Malloc0 : 2.01 36018.23 35.17 0.00 0.00 7091.87 1625.29 8965.32 00:30:00.782 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.782 Malloc0 : 2.02 36072.31 35.23 0.00 0.00 7066.56 838.86 7444.89 00:30:00.782 =================================================================================================================== 00:30:00.782 Total : 108139.04 105.60 0.00 0.00 7086.18 838.86 10643.05' 00:30:00.782 19:14:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:30:00.782 19:14:15 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 19:14:12.616453] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:00.783 00:30:00.783 Latency(us) 00:30:00.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.783 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.783 Malloc0 : 2.01 36048.50 35.20 0.00 0.00 7100.18 1638.40 10643.05 00:30:00.783 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.783 Malloc0 : 2.01 36018.23 35.17 0.00 0.00 7091.87 1625.29 8965.32 00:30:00.783 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:00.783 Malloc0 : 2.02 36072.31 35.23 0.00 0.00 7066.56 838.86 7444.89 00:30:00.783 =================================================================================================================== 00:30:00.783 Total : 108139.04 105.60 0.00 0.00 7086.18 838.86 10643.05' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:00.783 00:30:00.783 19:14:15 
bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:00.783 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:00.783 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:00.783 
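The create_job traces above show the harness appending INI-style sections ([global], [job0], ...) to test.conf, which bdevperf later consumes via -j. A minimal re-sketch of such a helper (hypothetical names and file path; the real implementation lives in bdevperf/common.sh) could look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a create_job-style helper: append an INI section
# header plus optional rw/filename keys to a job config file, then copy any
# extra per-job settings from stdin (mirroring the cat seen in the trace).
conf=/tmp/bdevperf_test.conf

create_job_sketch() {
    local section=$1 rw=$2 filename=$3
    {
        echo "[$section]"
        [[ -n $rw ]] && echo "rw=$rw"
        [[ -n $filename ]] && echo "filename=$filename"
        cat          # pass through extra settings supplied on stdin
        echo         # blank line between sections
    } >> "$conf"
}

: > "$conf"
create_job_sketch global rw Malloc0:Malloc1 <<'EOF'
iodepth=256
bs=1024
EOF
create_job_sketch job0 "" "" < /dev/null
cat "$conf"
```

The resulting file is a plain INI job config, which is why the later `rm -f .../test.conf` in cleanup is all that is needed to reset state between runs.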
00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:00.783 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:00.783 19:14:15 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:03.320 19:14:18 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-10 19:14:15.352597] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:03.320 [2024-06-10 19:14:15.352666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815989 ] 00:30:03.320 Using job config with 4 jobs 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.3 
cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.1 cannot be used 
00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:03.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.320 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:03.320 [2024-06-10 19:14:15.504035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.320 [2024-06-10 19:14:15.602031] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.321 cpumask for '\''job0'\'' is too big 00:30:03.321 cpumask for '\''job1'\'' is too big 00:30:03.321 cpumask for '\''job2'\'' is too big 00:30:03.321 cpumask for '\''job3'\'' is too big 00:30:03.321 Running I/O for 2 seconds... 
00:30:03.321 00:30:03.321 Latency(us) 00:30:03.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.03 13248.43 12.94 0.00 0.00 19304.66 3460.30 29989.27 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc1 : 2.03 13237.05 12.93 0.00 0.00 19304.96 4194.30 29989.27 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.03 13226.06 12.92 0.00 0.00 19256.81 3434.09 26424.12 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc1 : 2.03 13214.80 12.91 0.00 0.00 19257.24 4168.09 26424.12 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.04 13203.85 12.89 0.00 0.00 19210.42 3434.09 22963.81 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc1 : 2.04 13270.76 12.96 0.00 0.00 19097.71 4168.09 22963.81 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.05 13259.82 12.95 0.00 0.00 19052.63 3407.87 19713.23 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc1 : 2.05 13248.61 12.94 0.00 0.00 19051.34 4194.30 19713.23 00:30:03.321 =================================================================================================================== 00:30:03.321 Total : 105909.38 103.43 0.00 0.00 19191.53 3407.87 29989.27' 00:30:03.321 19:14:18 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-10 19:14:15.352597] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:03.321 00:30:03.321 Latency(us) 00:30:03.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.03 13248.43 12.94 0.00 0.00 19304.66 3460.30 29989.27 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc1 : 2.03 13237.05 12.93 0.00 0.00 19304.96 4194.30 29989.27 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.03 13226.06 12.92 0.00 0.00 19256.81 3434.09 26424.12 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc1 : 2.03 13214.80 12.91 0.00 0.00 19257.24 4168.09 26424.12 00:30:03.321 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.321 Malloc0 : 2.04 13203.85 12.89 0.00 0.00 19210.42 3434.09 22963.81 00:30:03.321 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc1 : 2.04 13270.76 12.96 0.00 0.00 19097.71 4168.09 22963.81 00:30:03.322 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc0 : 2.05 13259.82 12.95 0.00 0.00 19052.63 3407.87 19713.23 00:30:03.322 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc1 : 2.05 13248.61 12.94 0.00 0.00 19051.34 4194.30 19713.23 00:30:03.322 =================================================================================================================== 00:30:03.322 Total : 105909.38 103.43 0.00 0.00 19191.53 3407.87 29989.27' 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 19:14:15.352597] Starting SPDK v24.09-pre 
00:30:03.322 00:30:03.322 Latency(us) 00:30:03.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.322 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc0 : 2.03 13248.43 12.94 0.00 0.00 19304.66 3460.30 29989.27 00:30:03.322 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc1 : 2.03 13237.05 12.93 0.00 0.00 19304.96 4194.30 29989.27 00:30:03.322 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc0 : 2.03 13226.06 12.92 0.00 0.00 19256.81 3434.09 26424.12 00:30:03.322 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc1 : 2.03 13214.80 12.91 0.00 0.00 19257.24 4168.09 26424.12 00:30:03.322 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc0 : 2.04 13203.85 12.89 0.00 0.00 19210.42 3434.09 22963.81 00:30:03.322 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc1 : 2.04 13270.76 12.96 0.00 0.00 19097.71 4168.09 22963.81 00:30:03.322 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc0 : 2.05 13259.82 12.95 0.00 0.00 19052.63 3407.87 19713.23 00:30:03.322 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:03.322 Malloc1 : 2.05 13248.61 12.94 0.00 0.00 19051.34 4194.30 19713.23 00:30:03.322 =================================================================================================================== 00:30:03.322 Total : 105909.38 103.43 0.00 0.00 19191.53 3407.87 29989.27' 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/test_config.sh@44 
-- # cleanup 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:03.322 19:14:18 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:30:03.322 00:30:03.322 real 0m11.013s 00:30:03.322 user 0m9.731s 00:30:03.322 sys 0m1.133s 00:30:03.322 19:14:18 bdevperf_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:03.322 19:14:18 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:30:03.322 ************************************ 00:30:03.322 END TEST bdevperf_config 00:30:03.322 ************************************ 00:30:03.584 19:14:18 -- spdk/autotest.sh@192 -- # uname -s 00:30:03.584 19:14:18 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:30:03.584 19:14:18 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:03.584 19:14:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:03.584 19:14:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:03.584 19:14:18 -- common/autotest_common.sh@10 -- # set +x 00:30:03.584 ************************************ 00:30:03.584 START TEST reactor_set_interrupt 00:30:03.584 ************************************ 00:30:03.584 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:03.584 * Looking for test storage... 
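The `[[ 3 == \3 ]]` and `[[ 4 == \4 ]]` checks above work by piping the captured bdevperf output through the two greps visible in the trace (`grep -oE 'Using job config with [0-9]+ jobs'` then `grep -oE '[0-9]+'`) to extract the job count. A self-contained stand-in for that extraction step (function name and sample text are illustrative, not the real common.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the get_num_jobs extraction seen in the trace: isolate the
# "Using job config with N jobs" notice from bdevperf's output, then pull
# out N so the test can compare it against the expected job count.
get_num_jobs_sketch() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

sample_output='[2024-06-10 19:14:15] Starting bdevperf...
Using job config with 4 jobs
Running I/O for 2 seconds...'

n=$(get_num_jobs_sketch "$sample_output")
[[ $n == 4 ]] && echo "job count OK: $n"
```

Matching on the full notice first, then extracting the digits, avoids picking up the many other numbers (timestamps, PCI addresses, latencies) present in the raw output.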
00:30:03.584 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.584 19:14:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:30:03.584 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:03.584 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.584 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.584 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:30:03.584 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:03.585 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:03.585 
19:14:18 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:30:03.585 19:14:18 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:30:03.585 19:14:18 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:30:03.585 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:03.585 19:14:18 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:30:03.585 19:14:18 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:03.585 #define SPDK_CONFIG_H 00:30:03.585 #define SPDK_CONFIG_APPS 1 00:30:03.585 #define SPDK_CONFIG_ARCH native 00:30:03.585 #undef SPDK_CONFIG_ASAN 00:30:03.586 #undef SPDK_CONFIG_AVAHI 00:30:03.586 #undef SPDK_CONFIG_CET 00:30:03.586 #define SPDK_CONFIG_COVERAGE 1 00:30:03.586 #define SPDK_CONFIG_CROSS_PREFIX 00:30:03.586 #define SPDK_CONFIG_CRYPTO 1 00:30:03.586 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:30:03.586 #undef SPDK_CONFIG_CUSTOMOCF 00:30:03.586 #undef SPDK_CONFIG_DAOS 00:30:03.586 #define SPDK_CONFIG_DAOS_DIR 00:30:03.586 #define SPDK_CONFIG_DEBUG 1 00:30:03.586 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:30:03.586 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:30:03.586 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:03.586 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:03.586 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:03.586 #undef SPDK_CONFIG_DPDK_UADK 00:30:03.586 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:03.586 #define SPDK_CONFIG_EXAMPLES 1 00:30:03.586 #undef SPDK_CONFIG_FC 00:30:03.586 #define SPDK_CONFIG_FC_PATH 00:30:03.586 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:03.586 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:03.586 #undef SPDK_CONFIG_FUSE 00:30:03.586 #undef SPDK_CONFIG_FUZZER 00:30:03.586 #define SPDK_CONFIG_FUZZER_LIB 
00:30:03.586 #undef SPDK_CONFIG_GOLANG 00:30:03.586 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:03.586 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:03.586 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:03.586 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:03.586 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:03.586 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:03.586 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:03.586 #define SPDK_CONFIG_IDXD 1 00:30:03.586 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:03.586 #define SPDK_CONFIG_IPSEC_MB 1 00:30:03.586 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:30:03.586 #define SPDK_CONFIG_ISAL 1 00:30:03.586 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:03.586 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:03.586 #define SPDK_CONFIG_LIBDIR 00:30:03.586 #undef SPDK_CONFIG_LTO 00:30:03.586 #define SPDK_CONFIG_MAX_LCORES 00:30:03.586 #define SPDK_CONFIG_NVME_CUSE 1 00:30:03.586 #undef SPDK_CONFIG_OCF 00:30:03.586 #define SPDK_CONFIG_OCF_PATH 00:30:03.586 #define SPDK_CONFIG_OPENSSL_PATH 00:30:03.586 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:03.586 #define SPDK_CONFIG_PGO_DIR 00:30:03.586 #undef SPDK_CONFIG_PGO_USE 00:30:03.586 #define SPDK_CONFIG_PREFIX /usr/local 00:30:03.586 #undef SPDK_CONFIG_RAID5F 00:30:03.586 #undef SPDK_CONFIG_RBD 00:30:03.586 #define SPDK_CONFIG_RDMA 1 00:30:03.586 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:03.586 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:03.586 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:03.586 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:03.586 #define SPDK_CONFIG_SHARED 1 00:30:03.586 #undef SPDK_CONFIG_SMA 00:30:03.586 #define SPDK_CONFIG_TESTS 1 00:30:03.586 #undef SPDK_CONFIG_TSAN 00:30:03.586 #define SPDK_CONFIG_UBLK 1 00:30:03.586 #define SPDK_CONFIG_UBSAN 1 00:30:03.586 #undef SPDK_CONFIG_UNIT_TESTS 00:30:03.586 #undef SPDK_CONFIG_URING 00:30:03.586 #define SPDK_CONFIG_URING_PATH 00:30:03.586 #undef SPDK_CONFIG_URING_ZNS 00:30:03.586 #undef 
SPDK_CONFIG_USDT 00:30:03.586 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:30:03.586 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:30:03.586 #undef SPDK_CONFIG_VFIO_USER 00:30:03.586 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:03.586 #define SPDK_CONFIG_VHOST 1 00:30:03.586 #define SPDK_CONFIG_VIRTIO 1 00:30:03.586 #undef SPDK_CONFIG_VTUNE 00:30:03.586 #define SPDK_CONFIG_VTUNE_DIR 00:30:03.586 #define SPDK_CONFIG_WERROR 1 00:30:03.586 #define SPDK_CONFIG_WPDK_DIR 00:30:03.586 #undef SPDK_CONFIG_XNVME 00:30:03.586 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:03.586 19:14:18 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.586 19:14:18 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.586 19:14:18 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.586 19:14:18 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.586 19:14:18 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:30:03.586 19:14:18 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:30:03.586 19:14:18 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:03.586 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:03.587 19:14:18 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:30:03.587 
19:14:18 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:03.587 19:14:18 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- 
common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:03.587 19:14:18 reactor_set_interrupt 
-- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:03.587 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:30:03.588 19:14:18 reactor_set_interrupt -- 
common/autotest_common.sh@200 -- # cat 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 
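Editor's note: the trace above (autotest_common.sh@198-238, a few lines earlier) builds a leak-sanitizer suppression file on the fly and points `LSAN_OPTIONS` at it. A minimal standalone sketch of that idiom, using a temporary file instead of the real `/var/tmp/asan_suppression_file` path:

```shell
#!/usr/bin/env bash
# Sketch only (not the SPDK script itself): write one LSAN suppression rule
# and export LSAN_OPTIONS, mirroring the `echo leak:libfuse3.so` /
# `export LSAN_OPTIONS=suppressions=...` lines in the trace.
supp_file=$(mktemp)
echo 'leak:libfuse3.so' > "$supp_file"
export LSAN_OPTIONS="suppressions=$supp_file"
```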
00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 1816542 ]] 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 1816542 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.4bWTA2 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:30:03.588 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:30:03.588 19:14:18 
reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.4bWTA2/tests/interrupt /tmp/spdk.4bWTA2 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:30:03.848 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=956952576 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4327477248 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=spdk_root 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=50768027648 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742280704 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10974253056 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30866427904 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12338741248 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9715712 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30869127168 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871142400 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=2015232 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:30:03.849 * Looking for test storage... 
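Editor's note: the `read -r source fs size use avail _ mount` loop traced above (autotest_common.sh@360-363) splits `df -T` rows into parallel bash associative arrays keyed by mount point. A self-contained sketch of the same parsing, with hypothetical sample rows standing in for live `df -T` output:

```shell
#!/usr/bin/env bash
# Parse df -T style rows (Filesystem Type Blocks Used Available Use% Mount)
# into associative arrays, as the trace does. Sample rows are made up.
declare -A mounts fss sizes avails uses

sample_df='spdk_devtmpfs devtmpfs 67108864 0 67108864 0% /dev
tmpfs tmpfs 30871138304 4710400 30866427904 1% /run'

while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done <<< "$sample_df"
```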
00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=50768027648 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=13188845568 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
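Editor's note: the candidate loop traced above (autotest_common.sh@369-382) walks `storage_candidates` and keeps the first directory whose filesystem offers at least `requested_size` bytes. A sketch of that selection with hypothetical directories and free-space figures:

```shell
#!/usr/bin/env bash
# Pick the first candidate directory with enough free space, following the
# target_space >= requested_size check in the trace. Paths are hypothetical.
requested_size=2214592512   # the 2 GiB request plus margin seen in the trace

declare -A free_space=([/cand/small]=1048576 [/cand/big]=50768027648)
target_dir=
for d in /cand/small /cand/big; do
  if [ "${free_space[$d]}" -ge "$requested_size" ]; then
    target_dir=$d
    break
  fi
done
```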
00:30:03.849 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1681 -- # set -o errtrace 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # true 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@1688 -- # xtrace_fd 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:03.849 19:14:18 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1816589 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1816589 /var/tmp/spdk.sock 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@830 -- # '[' -z 1816589 ']' 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:03.849 19:14:18 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:03.849 19:14:18 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:03.849 [2024-06-10 19:14:18.402461] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:03.849 [2024-06-10 19:14:18.402520] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816589 ] 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.849 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.849 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.849 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.849 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.849 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.849 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:03.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.3 cannot be used 00:30:03.850 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.1 cannot be used 00:30:03.850 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:03.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.850 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:03.850 [2024-06-10 19:14:18.537040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.109 [2024-06-10 19:14:18.632598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.109 [2024-06-10 19:14:18.632618] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.109 [2024-06-10 19:14:18.632623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.109 [2024-06-10 19:14:18.702231] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:04.676 19:14:19 reactor_set_interrupt -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:04.676 19:14:19 reactor_set_interrupt -- common/autotest_common.sh@863 -- # return 0 00:30:04.676 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:30:04.677 19:14:19 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.936 Malloc0 00:30:04.936 Malloc1 00:30:04.936 Malloc2 00:30:04.936 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:30:04.936 19:14:19 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:30:04.936 19:14:19 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:04.936 19:14:19 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:04.936 5000+0 records in 00:30:04.936 5000+0 records out 00:30:04.936 10240000 bytes (10 MB, 9.8 MiB) copied, 0.017647 s, 580 MB/s 00:30:04.936 19:14:19 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:30:05.195 AIO0 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 1816589 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 1816589 without_thd 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1816589 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
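Editor's note: the `dd if=/dev/zero ... bs=2048 count=5000` step above backs the AIO bdev with a 2048 * 5000 = 10,240,000-byte file. The same sizing can be reproduced standalone (temporary file in place of the workspace aiofile path):

```shell
#!/usr/bin/env bash
# Create the zero-filled backing file exactly as setup_bdev_aio sizes it,
# then verify the byte count reported in the trace (10240000 bytes).
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
size=$(wc -c < "$aiofile")
rm -f "$aiofile"
```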
00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:05.195 19:14:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:05.480 19:14:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:30:05.740 spdk_thread ids are 1 on reactor0. 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1816589 0 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1816589 0 idle 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816589 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0' 00:30:05.740 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816589 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:05.999 19:14:20 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1816589 1 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1816589 1 idle 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816601 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 
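Editor's note: `reactor_get_thread_ids`, traced a few lines earlier, normalizes the hex reactor mask to decimal (`reactor_cpumask=0x1` becomes `1`, `0x4` becomes `4`) before passing it to `jq --arg reactor_cpumask`. The conversion is plain shell arithmetic; the jq filter string is reproduced below as data without invoking jq:

```shell
#!/usr/bin/env bash
# Normalize a hex cpumask to the decimal form the trace hands to jq.
reactor_cpumask=0x4
reactor_cpumask=$((reactor_cpumask))   # arithmetic expansion: 0x4 -> 4

# Filter used in the trace to select the spdk thread whose cpumask matches:
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
```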
00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816601 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1816589 2 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1816589 2 idle 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:05.999 19:14:20 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:05.999 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816602 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816602 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:30:06.257 19:14:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:30:06.515 [2024-06-10 19:14:21.081546] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:06.515 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:30:06.774 [2024-06-10 19:14:21.309283] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:30:06.774 [2024-06-10 19:14:21.309569] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:06.774 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:30:07.032 [2024-06-10 19:14:21.533277] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:30:07.032 [2024-06-10 19:14:21.533384] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1816589 0 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1816589 0 busy 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:07.032 19:14:21 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816589 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.78 reactor_0' 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816589 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.78 reactor_0 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1816589 2 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1816589 2 busy 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:07.032 19:14:21 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:07.032 19:14:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816602 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.35 reactor_2' 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816602 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.35 reactor_2 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:07.310 19:14:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:30:07.575 [2024-06-10 19:14:22.117274] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:30:07.575 [2024-06-10 19:14:22.117378] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1816589 2 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1816589 2 idle 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816602 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2' 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816602 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:07.575 19:14:22 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:07.575 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:30:07.833 [2024-06-10 19:14:22.525264] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:30:07.833 [2024-06-10 19:14:22.525387] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:07.833 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:30:07.833 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:30:07.833 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:30:08.092 [2024-06-10 19:14:22.753661] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1816589 0 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1816589 0 idle 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1816589 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1816589 -w 256 00:30:08.092 19:14:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1816589 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.59 reactor_0' 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1816589 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.59 reactor_0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:30:08.352 19:14:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 1816589 00:30:08.352 19:14:22 reactor_set_interrupt -- common/autotest_common.sh@949 -- # '[' -z 1816589 ']' 00:30:08.352 19:14:22 reactor_set_interrupt -- common/autotest_common.sh@953 -- # kill -0 1816589 00:30:08.352 19:14:22 reactor_set_interrupt -- common/autotest_common.sh@954 -- # uname 00:30:08.352 19:14:22 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:08.352 19:14:22 reactor_set_interrupt -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1816589 00:30:08.352 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:08.352 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:08.352 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1816589' 00:30:08.352 killing process with pid 1816589 00:30:08.352 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@968 -- # kill 1816589 00:30:08.352 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@973 -- # wait 1816589 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:30:08.612 19:14:23 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1817456 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:08.612 19:14:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1817456 /var/tmp/spdk.sock 00:30:08.612 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@830 -- # '[' -z 1817456 ']' 00:30:08.612 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.612 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:08.612 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.612 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:08.612 19:14:23 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:08.612 [2024-06-10 19:14:23.307029] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:08.612 [2024-06-10 19:14:23.307091] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817456 ] 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.3 cannot be used 00:30:08.872 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.1 cannot be used 00:30:08.872 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:08.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:08.872 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:08.872 [2024-06-10 19:14:23.439085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:08.872 [2024-06-10 19:14:23.524928] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.872 [2024-06-10 19:14:23.525024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.872 [2024-06-10 19:14:23.525025] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.872 [2024-06-10 19:14:23.593811] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:09.810 19:14:24 reactor_set_interrupt -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:09.810 19:14:24 reactor_set_interrupt -- common/autotest_common.sh@863 -- # return 0 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:09.810 Malloc0 00:30:09.810 Malloc1 00:30:09.810 Malloc2 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:09.810 5000+0 records in 00:30:09.810 5000+0 records out 00:30:09.810 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0238197 s, 430 MB/s 00:30:09.810 19:14:24 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:30:10.070 AIO0 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 1817456 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 1817456 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1817456 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:30:10.070 19:14:24 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:10.070 19:14:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:10.330 19:14:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:30:10.330 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:30:10.330 19:14:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:30:10.330 19:14:24 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:30:10.330 19:14:24 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:10.330 19:14:25 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:30:10.330 19:14:25 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:10.330 19:14:25 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:10.330 19:14:25 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:30:10.590 
19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:30:10.590 spdk_thread ids are 1 on reactor0. 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1817456 0 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1817456 0 idle 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:10.590 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1817456 -w 256 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817456 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0' 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1817456 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.37 reactor_0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1817456 1 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1817456 1 idle 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1817456 -w 256 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817479 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:30:10.849 19:14:25 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1817479 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1817456 2 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1817456 2 idle 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 1817456 -w 256 00:30:10.849 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817480 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1817480 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:30:11.108 19:14:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:30:11.367 [2024-06-10 19:14:25.989692] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:30:11.367 [2024-06-10 19:14:25.989893] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:30:11.367 [2024-06-10 19:14:25.990081] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:11.367 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:30:11.626 [2024-06-10 19:14:26.214067] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:30:11.626 [2024-06-10 19:14:26.214247] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1817456 0 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1817456 0 busy 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1817456 -w 256 00:30:11.626 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817456 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.78 reactor_0' 00:30:11.885 19:14:26 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 1817456 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.78 reactor_0 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1817456 2 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1817456 2 busy 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1817456 -w 256 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:11.885 
19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817480 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2' 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1817480 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:30:11.885 19:14:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:11.886 19:14:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:11.886 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:30:12.144 [2024-06-10 19:14:26.803730] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:30:12.144 [2024-06-10 19:14:26.803830] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1817456 2 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1817456 2 idle 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:12.144 19:14:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:12.145 19:14:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:12.145 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:12.145 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1817456 -w 256 00:30:12.403 19:14:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817480 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2' 00:30:12.403 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:12.403 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1817480 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2 00:30:12.403 19:14:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:12.403 19:14:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:12.403 19:14:27 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:12.403 19:14:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:12.403 19:14:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:12.403 19:14:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:12.403 19:14:27 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:12.403 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:30:12.662 [2024-06-10 19:14:27.216791] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:30:12.662 [2024-06-10 19:14:27.216987] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:30:12.662 [2024-06-10 19:14:27.217010] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1817456 0 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1817456 0 idle 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1817456 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:12.662 19:14:27 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1817456 -w 256 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1817456 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.60 reactor_0' 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1817456 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.60 reactor_0 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:12.662 19:14:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:12.922 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 1817456 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@949 -- # '[' -z 1817456 ']' 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@953 -- # kill -0 
1817456 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@954 -- # uname 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1817456 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1817456' 00:30:12.922 killing process with pid 1817456 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@968 -- # kill 1817456 00:30:12.922 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@973 -- # wait 1817456 00:30:13.183 19:14:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:30:13.183 19:14:27 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:30:13.183 00:30:13.183 real 0m9.567s 00:30:13.183 user 0m8.885s 00:30:13.183 sys 0m2.074s 00:30:13.183 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:13.183 19:14:27 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:13.183 ************************************ 00:30:13.183 END TEST reactor_set_interrupt 00:30:13.183 ************************************ 00:30:13.183 19:14:27 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:13.183 19:14:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:13.183 19:14:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:13.183 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:30:13.183 
************************************ 00:30:13.183 START TEST reap_unregistered_poller 00:30:13.183 ************************************ 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:13.183 * Looking for test storage... 00:30:13.183 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:13.183 19:14:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:30:13.183 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:13.183 
19:14:27 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:13.183 19:14:27 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:30:13.183 19:14:27 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:30:13.184 19:14:27 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:30:13.184 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:13.184 19:14:27 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:30:13.184 19:14:27 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:30:13.184 #define SPDK_CONFIG_H 00:30:13.184 #define SPDK_CONFIG_APPS 1 00:30:13.184 #define SPDK_CONFIG_ARCH native 00:30:13.184 #undef SPDK_CONFIG_ASAN 00:30:13.184 #undef SPDK_CONFIG_AVAHI 00:30:13.184 #undef SPDK_CONFIG_CET 00:30:13.184 #define SPDK_CONFIG_COVERAGE 1 00:30:13.184 #define SPDK_CONFIG_CROSS_PREFIX 00:30:13.184 #define SPDK_CONFIG_CRYPTO 1 00:30:13.184 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:30:13.184 #undef SPDK_CONFIG_CUSTOMOCF 00:30:13.184 #undef SPDK_CONFIG_DAOS 00:30:13.184 #define SPDK_CONFIG_DAOS_DIR 00:30:13.184 #define SPDK_CONFIG_DEBUG 1 00:30:13.184 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:30:13.184 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:30:13.184 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:13.184 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:13.184 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:13.184 #undef SPDK_CONFIG_DPDK_UADK 00:30:13.184 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:13.184 #define SPDK_CONFIG_EXAMPLES 1 00:30:13.184 #undef SPDK_CONFIG_FC 00:30:13.184 #define SPDK_CONFIG_FC_PATH 00:30:13.184 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:13.184 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:13.184 #undef SPDK_CONFIG_FUSE 00:30:13.184 #undef SPDK_CONFIG_FUZZER 00:30:13.184 #define SPDK_CONFIG_FUZZER_LIB 00:30:13.184 #undef SPDK_CONFIG_GOLANG 00:30:13.184 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:13.184 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:13.184 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:13.184 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:13.184 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:13.184 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:13.184 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:13.184 #define SPDK_CONFIG_IDXD 1 00:30:13.184 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:13.184 #define SPDK_CONFIG_IPSEC_MB 1 00:30:13.184 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 
00:30:13.184 #define SPDK_CONFIG_ISAL 1 00:30:13.184 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:13.184 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:13.184 #define SPDK_CONFIG_LIBDIR 00:30:13.184 #undef SPDK_CONFIG_LTO 00:30:13.184 #define SPDK_CONFIG_MAX_LCORES 00:30:13.184 #define SPDK_CONFIG_NVME_CUSE 1 00:30:13.184 #undef SPDK_CONFIG_OCF 00:30:13.184 #define SPDK_CONFIG_OCF_PATH 00:30:13.184 #define SPDK_CONFIG_OPENSSL_PATH 00:30:13.184 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:13.184 #define SPDK_CONFIG_PGO_DIR 00:30:13.184 #undef SPDK_CONFIG_PGO_USE 00:30:13.184 #define SPDK_CONFIG_PREFIX /usr/local 00:30:13.184 #undef SPDK_CONFIG_RAID5F 00:30:13.184 #undef SPDK_CONFIG_RBD 00:30:13.184 #define SPDK_CONFIG_RDMA 1 00:30:13.184 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:13.184 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:13.184 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:13.184 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:13.184 #define SPDK_CONFIG_SHARED 1 00:30:13.184 #undef SPDK_CONFIG_SMA 00:30:13.184 #define SPDK_CONFIG_TESTS 1 00:30:13.184 #undef SPDK_CONFIG_TSAN 00:30:13.184 #define SPDK_CONFIG_UBLK 1 00:30:13.184 #define SPDK_CONFIG_UBSAN 1 00:30:13.184 #undef SPDK_CONFIG_UNIT_TESTS 00:30:13.184 #undef SPDK_CONFIG_URING 00:30:13.184 #define SPDK_CONFIG_URING_PATH 00:30:13.184 #undef SPDK_CONFIG_URING_ZNS 00:30:13.184 #undef SPDK_CONFIG_USDT 00:30:13.184 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:30:13.184 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:30:13.184 #undef SPDK_CONFIG_VFIO_USER 00:30:13.184 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:13.184 #define SPDK_CONFIG_VHOST 1 00:30:13.184 #define SPDK_CONFIG_VIRTIO 1 00:30:13.184 #undef SPDK_CONFIG_VTUNE 00:30:13.184 #define SPDK_CONFIG_VTUNE_DIR 00:30:13.184 #define SPDK_CONFIG_WERROR 1 00:30:13.184 #define SPDK_CONFIG_WPDK_DIR 00:30:13.184 #undef SPDK_CONFIG_XNVME 00:30:13.184 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:13.184 19:14:27 reap_unregistered_poller -- 
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:13.184 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:13.184 19:14:27 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.184 19:14:27 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.184 19:14:27 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.184 19:14:27 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.184 19:14:27 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.184 19:14:27 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.184 19:14:27 
reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:30:13.184 19:14:27 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.184 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:13.184 19:14:27 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:30:13.446 19:14:27 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:30:13.447 19:14:27 reap_unregistered_poller -- 
pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:30:13.447 19:14:27 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:30:13.447 19:14:27 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:30:13.447 19:14:27 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:30:13.447 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@177 -- # 
export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:13.448 19:14:27 reap_unregistered_poller -- 
common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:30:13.448 19:14:27 
reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:30:13.448 19:14:27 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@279 -- 
# MAKE=make 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 1818361 ]] 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 1818361 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:30:13.448 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.adUWIk 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:13.449 19:14:28 reap_unregistered_poller -- 
common/autotest_common.sh@345 -- # [[ -n '' ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.adUWIk/tests/interrupt /tmp/spdk.adUWIk 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=956952576 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@363 -- # 
uses["$mount"]=4327477248 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=50767855616 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742280704 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10974425088 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30866427904 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12338741248 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:30:13.449 19:14:28 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # uses["$mount"]=9715712 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30869127168 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871142400 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=2015232 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:30:13.449 * Looking for test storage... 
00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=50767855616 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=13189017600 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.449 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1681 -- # set -o errtrace 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # true 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@1688 -- # xtrace_fd 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:13.449 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1818402 00:30:13.449 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:13.450 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1818402 /var/tmp/spdk.sock 00:30:13.450 19:14:28 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:13.450 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@830 -- # '[' -z 1818402 ']' 00:30:13.450 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.450 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:13.450 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:13.450 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:13.450 19:14:28 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:13.450 [2024-06-10 19:14:28.087703] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:30:13.450 [2024-06-10 19:14:28.087760] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818402 ] 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.1 
cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.3 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:01.7 cannot be used 
00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.1 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:13.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.450 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:13.709 [2024-06-10 19:14:28.222988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.710 [2024-06-10 19:14:28.314618] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.710 [2024-06-10 19:14:28.314716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.710 [2024-06-10 19:14:28.314720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.710 [2024-06-10 19:14:28.384203] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
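The poller check that follows extracts poller names from the `thread_get_pollers` RPC result with `jq`. A minimal sketch of that extraction, assuming `jq` is installed and using a hand-written JSON snippet shaped like the RPC output captured in the trace:

```shell
#!/usr/bin/env bash
# Extract active and timed poller names from thread_get_pollers-style JSON,
# as the reap_unregistered_poller.sh trace does (sample JSON from the log).
app_thread='{
  "name": "app_thread",
  "id": 1,
  "active_pollers": [],
  "timed_pollers": [ { "name": "rpc_subsystem_poll_servers" } ],
  "paused_pollers": []
}'
native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
native_pollers+=' '   # separator, kept even when active_pollers is empty
native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
echo "$native_pollers"
```

Because `active_pollers` is empty here, the result is `" rpc_subsystem_poll_servers"` with a leading space, which is why the trace's later comparison matches against a pattern that begins with an escaped space.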
00:30:14.276 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:14.276 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@863 -- # return 0 00:30:14.276 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:30:14.276 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:30:14.276 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.276 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:14.276 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:30:14.536 "name": "app_thread", 00:30:14.536 "id": 1, 00:30:14.536 "active_pollers": [], 00:30:14.536 "timed_pollers": [ 00:30:14.536 { 00:30:14.536 "name": "rpc_subsystem_poll_servers", 00:30:14.536 "id": 1, 00:30:14.536 "state": "waiting", 00:30:14.536 "run_count": 0, 00:30:14.536 "busy_count": 0, 00:30:14.536 "period_ticks": 10000000 00:30:14.536 } 00:30:14.536 ], 00:30:14.536 "paused_pollers": [] 00:30:14.536 }' 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:30:14.536 
19:14:29 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:14.536 5000+0 records in 00:30:14.536 5000+0 records out 00:30:14.536 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0273439 s, 374 MB/s 00:30:14.536 19:14:29 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:30:14.795 AIO0 00:30:14.795 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:15.055 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:30:15.055 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:30:15.055 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:30:15.055 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:15.055 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:15.055 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:15.055 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:30:15.055 "name": "app_thread", 00:30:15.055 "id": 1, 00:30:15.055 "active_pollers": [], 00:30:15.055 "timed_pollers": [ 00:30:15.055 { 00:30:15.055 "name": "rpc_subsystem_poll_servers", 00:30:15.055 "id": 1, 00:30:15.055 "state": "waiting", 00:30:15.055 "run_count": 0, 00:30:15.055 "busy_count": 0, 
00:30:15.055 "period_ticks": 10000000 00:30:15.055 } 00:30:15.055 ], 00:30:15.055 "paused_pollers": [] 00:30:15.055 }' 00:30:15.055 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:30:15.314 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:30:15.314 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:30:15.314 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:30:15.314 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:30:15.314 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:30:15.315 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:30:15.315 19:14:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 1818402 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@949 -- # '[' -z 1818402 ']' 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@953 -- # kill -0 1818402 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@954 -- # uname 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1818402 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@967 -- # 
echo 'killing process with pid 1818402' 00:30:15.315 killing process with pid 1818402 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@968 -- # kill 1818402 00:30:15.315 19:14:29 reap_unregistered_poller -- common/autotest_common.sh@973 -- # wait 1818402 00:30:15.574 19:14:30 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:30:15.574 19:14:30 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:30:15.574 00:30:15.574 real 0m2.357s 00:30:15.574 user 0m1.447s 00:30:15.574 sys 0m0.655s 00:30:15.574 19:14:30 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:15.574 19:14:30 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:30:15.574 ************************************ 00:30:15.574 END TEST reap_unregistered_poller 00:30:15.574 ************************************ 00:30:15.574 19:14:30 -- spdk/autotest.sh@198 -- # uname -s 00:30:15.574 19:14:30 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:30:15.574 19:14:30 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:30:15.574 19:14:30 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:30:15.574 19:14:30 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:30:15.574 19:14:30 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:15.574 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:30:15.574 19:14:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 
-- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:30:15.574 19:14:30 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:30:15.574 19:14:30 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:15.574 19:14:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:15.574 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:30:15.574 ************************************ 00:30:15.574 START TEST compress_compdev 00:30:15.574 ************************************ 00:30:15.574 19:14:30 compress_compdev -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:30:15.834 * Looking for test storage... 
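The `killprocess` helper traced earlier (autotest_common.sh@949-968) checks that the pid is alive with `kill -0` and inspects the process name with `ps --no-headers -o comm=` before sending the signal. A hedged sketch of that pattern — the guard logic is simplified and the function body is illustrative, not the exact upstream helper:

```shell
#!/usr/bin/env bash
# Liveness-checked kill, modeled on the killprocess trace above:
# refuse to act on a dead pid or on a sudo wrapper process.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1            # is the process alive?
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")   # never kill a bare sudo
  [ "$process_name" != "sudo" ] || return 1
  kill "$pid"
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
wait "$bgpid" 2>/dev/null
```

After `killprocess` returns, a follow-up `kill -0 "$pid"` fails, which is the same post-condition the autotest harness relies on before printing its "killing process with pid ..." message.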
00:30:15.834 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:30:15.834 19:14:30 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:15.834 19:14:30 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.834 19:14:30 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.834 19:14:30 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.834 19:14:30 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.834 19:14:30 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.834 19:14:30 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.834 19:14:30 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:30:15.834 19:14:30 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.834 19:14:30 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.834 19:14:30 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:15.834 19:14:30 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:30:15.834 19:14:30 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:30:15.834 19:14:30 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:30:15.835 19:14:30 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:15.835 19:14:30 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1819013 00:30:15.835 19:14:30 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:15.835 19:14:30 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1819013 00:30:15.835 19:14:30 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:30:15.835 19:14:30 compress_compdev -- common/autotest_common.sh@830 -- # '[' -z 1819013 ']' 00:30:15.835 19:14:30 compress_compdev -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.835 19:14:30 compress_compdev -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:15.835 19:14:30 compress_compdev -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.835 19:14:30 compress_compdev -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:15.835 19:14:30 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:15.835 [2024-06-10 19:14:30.473532] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:30:15.835 [2024-06-10 19:14:30.473600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819013 ] 00:30:15.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:15.835 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:16.095 [2024-06-10 19:14:30.598865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:16.095 [2024-06-10 19:14:30.686452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.095 [2024-06-10 19:14:30.686457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.663 [2024-06-10 19:14:31.387010] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:16.922 19:14:31 compress_compdev -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:16.922 19:14:31 compress_compdev -- common/autotest_common.sh@863 -- # return 0 00:30:16.922 19:14:31 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:30:16.922 19:14:31 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:16.922 19:14:31 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:20.213 [2024-06-10 19:14:34.538170] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1187800 PMD being used: compress_qat 00:30:20.213 19:14:34
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:20.213 19:14:34 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:20.472 [ 00:30:20.472 { 00:30:20.472 "name": "Nvme0n1", 00:30:20.472 "aliases": [ 00:30:20.472 "bd0442da-33ba-4353-bff1-f04d30a18798" 00:30:20.472 ], 00:30:20.472 "product_name": "NVMe disk", 00:30:20.472 "block_size": 512, 00:30:20.472 "num_blocks": 3125627568, 00:30:20.472 "uuid": "bd0442da-33ba-4353-bff1-f04d30a18798", 00:30:20.472 "assigned_rate_limits": { 00:30:20.472 "rw_ios_per_sec": 0, 00:30:20.472 "rw_mbytes_per_sec": 0, 00:30:20.472 "r_mbytes_per_sec": 0, 00:30:20.472 "w_mbytes_per_sec": 0 00:30:20.472 }, 00:30:20.472 "claimed": false, 00:30:20.472 "zoned": false, 00:30:20.472 "supported_io_types": { 00:30:20.472 "read": true, 00:30:20.472 "write": true, 00:30:20.472 "unmap": true, 00:30:20.472 "write_zeroes": true, 00:30:20.472 "flush": true, 00:30:20.472 "reset": true, 00:30:20.472 "compare": false, 00:30:20.472 "compare_and_write": false, 00:30:20.472 "abort": true, 00:30:20.472 "nvme_admin": true, 00:30:20.472 "nvme_io": true 00:30:20.472 }, 00:30:20.472 "driver_specific": { 00:30:20.472 "nvme": [ 00:30:20.472 { 00:30:20.472 "pci_address": "0000:d8:00.0", 00:30:20.472 "trid": { 00:30:20.472 
"trtype": "PCIe", 00:30:20.472 "traddr": "0000:d8:00.0" 00:30:20.472 }, 00:30:20.472 "ctrlr_data": { 00:30:20.472 "cntlid": 0, 00:30:20.472 "vendor_id": "0x8086", 00:30:20.473 "model_number": "INTEL SSDPE2KE016T8", 00:30:20.473 "serial_number": "PHLN036005WL1P6AGN", 00:30:20.473 "firmware_revision": "VDV10184", 00:30:20.473 "oacs": { 00:30:20.473 "security": 0, 00:30:20.473 "format": 1, 00:30:20.473 "firmware": 1, 00:30:20.473 "ns_manage": 1 00:30:20.473 }, 00:30:20.473 "multi_ctrlr": false, 00:30:20.473 "ana_reporting": false 00:30:20.473 }, 00:30:20.473 "vs": { 00:30:20.473 "nvme_version": "1.2" 00:30:20.473 }, 00:30:20.473 "ns_data": { 00:30:20.473 "id": 1, 00:30:20.473 "can_share": false 00:30:20.473 } 00:30:20.473 } 00:30:20.473 ], 00:30:20.473 "mp_policy": "active_passive" 00:30:20.473 } 00:30:20.473 } 00:30:20.473 ] 00:30:20.473 19:14:35 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:20.473 19:14:35 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:20.473 [2024-06-10 19:14:35.203106] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xfec510 PMD being used: compress_qat 00:30:21.410 6cbecab0-107b-48b1-844e-5b1e38a8d45e 00:30:21.410 19:14:36 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:21.670 c2a46b9e-1769-4ee0-a98d-f818e4b3dad5 00:30:21.670 19:14:36 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:21.670 19:14:36 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:30:21.670 19:14:36 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:21.670 19:14:36 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:21.670 19:14:36 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:21.670 
19:14:36 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:21.670 19:14:36 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:21.929 19:14:36 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:21.929 [ 00:30:21.929 { 00:30:21.929 "name": "c2a46b9e-1769-4ee0-a98d-f818e4b3dad5", 00:30:21.929 "aliases": [ 00:30:21.929 "lvs0/lv0" 00:30:21.929 ], 00:30:21.929 "product_name": "Logical Volume", 00:30:21.929 "block_size": 512, 00:30:21.929 "num_blocks": 204800, 00:30:21.929 "uuid": "c2a46b9e-1769-4ee0-a98d-f818e4b3dad5", 00:30:21.929 "assigned_rate_limits": { 00:30:21.929 "rw_ios_per_sec": 0, 00:30:21.929 "rw_mbytes_per_sec": 0, 00:30:21.929 "r_mbytes_per_sec": 0, 00:30:21.929 "w_mbytes_per_sec": 0 00:30:21.929 }, 00:30:21.929 "claimed": false, 00:30:21.929 "zoned": false, 00:30:21.929 "supported_io_types": { 00:30:21.929 "read": true, 00:30:21.929 "write": true, 00:30:21.929 "unmap": true, 00:30:21.929 "write_zeroes": true, 00:30:21.929 "flush": false, 00:30:21.929 "reset": true, 00:30:21.929 "compare": false, 00:30:21.929 "compare_and_write": false, 00:30:21.929 "abort": false, 00:30:21.929 "nvme_admin": false, 00:30:21.929 "nvme_io": false 00:30:21.929 }, 00:30:21.929 "driver_specific": { 00:30:21.929 "lvol": { 00:30:21.929 "lvol_store_uuid": "6cbecab0-107b-48b1-844e-5b1e38a8d45e", 00:30:21.929 "base_bdev": "Nvme0n1", 00:30:21.929 "thin_provision": true, 00:30:21.929 "num_allocated_clusters": 0, 00:30:21.929 "snapshot": false, 00:30:21.929 "clone": false, 00:30:21.929 "esnap_clone": false 00:30:21.929 } 00:30:21.929 } 00:30:21.929 } 00:30:21.929 ] 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:22.189 19:14:36 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:22.189 19:14:36 
compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:22.189 [2024-06-10 19:14:36.848604] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:22.189 COMP_lvs0/lv0 00:30:22.189 19:14:36 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:22.189 19:14:36 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.448 19:14:37 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:22.708 [ 00:30:22.708 { 00:30:22.708 "name": "COMP_lvs0/lv0", 00:30:22.708 "aliases": [ 00:30:22.708 "dff691d6-5288-51cf-b7e5-cc5b759f4422" 00:30:22.708 ], 00:30:22.708 "product_name": "compress", 00:30:22.708 "block_size": 512, 00:30:22.708 "num_blocks": 200704, 00:30:22.708 "uuid": "dff691d6-5288-51cf-b7e5-cc5b759f4422", 00:30:22.708 "assigned_rate_limits": { 00:30:22.708 "rw_ios_per_sec": 0, 00:30:22.708 "rw_mbytes_per_sec": 0, 00:30:22.708 "r_mbytes_per_sec": 0, 00:30:22.708 "w_mbytes_per_sec": 0 00:30:22.708 }, 00:30:22.708 "claimed": false, 00:30:22.708 "zoned": false, 00:30:22.708 "supported_io_types": { 00:30:22.708 "read": true, 00:30:22.708 "write": true, 00:30:22.708 "unmap": false, 00:30:22.708 "write_zeroes": true, 
00:30:22.708 "flush": false, 00:30:22.708 "reset": false, 00:30:22.708 "compare": false, 00:30:22.708 "compare_and_write": false, 00:30:22.708 "abort": false, 00:30:22.708 "nvme_admin": false, 00:30:22.708 "nvme_io": false 00:30:22.708 }, 00:30:22.708 "driver_specific": { 00:30:22.708 "compress": { 00:30:22.708 "name": "COMP_lvs0/lv0", 00:30:22.708 "base_bdev_name": "c2a46b9e-1769-4ee0-a98d-f818e4b3dad5" 00:30:22.708 } 00:30:22.708 } 00:30:22.708 } 00:30:22.708 ] 00:30:22.708 19:14:37 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:22.708 19:14:37 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:22.708 [2024-06-10 19:14:37.450877] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fd7f41b15c0 PMD being used: compress_qat 00:30:22.708 [2024-06-10 19:14:37.453005] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1184710 PMD being used: compress_qat 00:30:22.708 Running I/O for 3 seconds... 
00:30:26.000 00:30:26.000 Latency(us) 00:30:26.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.000 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:26.000 Verification LBA range: start 0x0 length 0x3100 00:30:26.000 COMP_lvs0/lv0 : 3.01 4092.53 15.99 0.00 0.00 7771.26 131.89 15309.21 00:30:26.000 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:26.000 Verification LBA range: start 0x3100 length 0x3100 00:30:26.000 COMP_lvs0/lv0 : 3.01 4179.64 16.33 0.00 0.00 7611.62 121.24 15938.36 00:30:26.000 =================================================================================================================== 00:30:26.000 Total : 8272.17 32.31 0.00 0.00 7690.55 121.24 15938.36 00:30:26.000 0 00:30:26.000 19:14:40 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:26.000 19:14:40 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:26.000 19:14:40 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:26.260 19:14:40 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:26.260 19:14:40 compress_compdev -- compress/compress.sh@78 -- # killprocess 1819013 00:30:26.260 19:14:40 compress_compdev -- common/autotest_common.sh@949 -- # '[' -z 1819013 ']' 00:30:26.260 19:14:40 compress_compdev -- common/autotest_common.sh@953 -- # kill -0 1819013 00:30:26.260 19:14:40 compress_compdev -- common/autotest_common.sh@954 -- # uname 00:30:26.260 19:14:40 compress_compdev -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:26.260 19:14:40 compress_compdev -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1819013 00:30:26.519 19:14:41 compress_compdev -- common/autotest_common.sh@955 -- # process_name=reactor_1 
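The Total row in the bdevperf table above can be reproduced from the two per-job rows: IOPS simply add, MiB/s is IOPS divided by 256 (4096-byte I/Os), and the overall average latency is the IOPS-weighted mean of the per-job averages. A minimal sketch of that arithmetic (variable names are ours, not bdevperf's):

```python
# Per-job rows from the table above: (IOPS, average latency in us)
jobs = [(4092.53, 7771.26), (4179.64, 7611.62)]

total_iops = sum(iops for iops, _ in jobs)
total_mibs = total_iops * 4096 / 2**20  # 4 KiB I/Os -> MiB/s = IOPS / 256
# Overall average latency, weighted by each job's IOPS
avg_lat = sum(iops * lat for iops, lat in jobs) / total_iops

print(f"{total_iops:.2f} IOPS, {total_mibs:.2f} MiB/s, {avg_lat:.2f} us avg")
```

This matches the Total row (8272.17 IOPS, 32.31 MiB/s); the weighted latency lands within ~0.1 us of the reported 7690.55 because the per-job inputs are themselves rounded.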
00:30:26.519 19:14:41 compress_compdev -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:26.519 19:14:41 compress_compdev -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1819013' 00:30:26.519 killing process with pid 1819013 00:30:26.519 19:14:41 compress_compdev -- common/autotest_common.sh@968 -- # kill 1819013 00:30:26.519 Received shutdown signal, test time was about 3.000000 seconds 00:30:26.519 00:30:26.519 Latency(us) 00:30:26.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.519 =================================================================================================================== 00:30:26.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.519 19:14:41 compress_compdev -- common/autotest_common.sh@973 -- # wait 1819013 00:30:28.425 19:14:43 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:30:28.425 19:14:43 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:28.425 19:14:43 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1821140 00:30:28.425 19:14:43 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:28.425 19:14:43 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:30:28.425 19:14:43 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1821140 00:30:28.425 19:14:43 compress_compdev -- common/autotest_common.sh@830 -- # '[' -z 1821140 ']' 00:30:28.425 19:14:43 compress_compdev -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.425 19:14:43 compress_compdev -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:28.425 19:14:43 compress_compdev -- common/autotest_common.sh@837 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.425 19:14:43 compress_compdev -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:28.425 19:14:43 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:28.685 [2024-06-10 19:14:43.199801] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:30:28.685 [2024-06-10 19:14:43.199853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821140 ] 00:30:28.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:28.685 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:28.685 [2024-06-10 19:14:43.308753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:28.685 [2024-06-10 19:14:43.396745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.685 [2024-06-10 19:14:43.396751] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.624 [2024-06-10 19:14:44.093196] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:29.624 19:14:44 compress_compdev -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:29.624 19:14:44 compress_compdev -- common/autotest_common.sh@863 -- # return 0 00:30:29.624 19:14:44 compress_compdev -- compress/compress.sh@74 -- #
create_vols 512 00:30:29.624 19:14:44 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:29.625 19:14:44 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:32.979 [2024-06-10 19:14:47.250143] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a29800 PMD being used: compress_qat 00:30:32.979 19:14:47 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:32.979 19:14:47 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:30:32.979 19:14:47 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:32.979 19:14:47 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:32.979 19:14:47 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:32.979 19:14:47 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:32.979 19:14:47 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:32.980 19:14:47 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:32.980 [ 00:30:32.980 { 00:30:32.980 "name": "Nvme0n1", 00:30:32.980 "aliases": [ 00:30:32.980 "3e9d635d-d753-4ccf-ba36-60d5bde6a140" 00:30:32.980 ], 00:30:32.980 "product_name": "NVMe disk", 00:30:32.980 "block_size": 512, 00:30:32.980 "num_blocks": 3125627568, 00:30:32.980 "uuid": "3e9d635d-d753-4ccf-ba36-60d5bde6a140", 00:30:32.980 "assigned_rate_limits": { 00:30:32.980 "rw_ios_per_sec": 0, 00:30:32.980 "rw_mbytes_per_sec": 0, 00:30:32.980 "r_mbytes_per_sec": 0, 00:30:32.980 "w_mbytes_per_sec": 0 00:30:32.980 }, 00:30:32.980 "claimed": false, 00:30:32.980 "zoned": false, 00:30:32.980 "supported_io_types": { 00:30:32.980 "read": true, 
00:30:32.980 "write": true, 00:30:32.980 "unmap": true, 00:30:32.980 "write_zeroes": true, 00:30:32.980 "flush": true, 00:30:32.980 "reset": true, 00:30:32.980 "compare": false, 00:30:32.980 "compare_and_write": false, 00:30:32.980 "abort": true, 00:30:32.980 "nvme_admin": true, 00:30:32.980 "nvme_io": true 00:30:32.980 }, 00:30:32.980 "driver_specific": { 00:30:32.980 "nvme": [ 00:30:32.980 { 00:30:32.980 "pci_address": "0000:d8:00.0", 00:30:32.980 "trid": { 00:30:32.980 "trtype": "PCIe", 00:30:32.980 "traddr": "0000:d8:00.0" 00:30:32.980 }, 00:30:32.980 "ctrlr_data": { 00:30:32.980 "cntlid": 0, 00:30:32.980 "vendor_id": "0x8086", 00:30:32.980 "model_number": "INTEL SSDPE2KE016T8", 00:30:32.980 "serial_number": "PHLN036005WL1P6AGN", 00:30:32.980 "firmware_revision": "VDV10184", 00:30:32.980 "oacs": { 00:30:32.980 "security": 0, 00:30:32.980 "format": 1, 00:30:32.980 "firmware": 1, 00:30:32.980 "ns_manage": 1 00:30:32.980 }, 00:30:32.980 "multi_ctrlr": false, 00:30:32.980 "ana_reporting": false 00:30:32.980 }, 00:30:32.980 "vs": { 00:30:32.980 "nvme_version": "1.2" 00:30:32.980 }, 00:30:32.980 "ns_data": { 00:30:32.980 "id": 1, 00:30:32.980 "can_share": false 00:30:32.980 } 00:30:32.980 } 00:30:32.980 ], 00:30:32.980 "mp_policy": "active_passive" 00:30:32.980 } 00:30:32.980 } 00:30:32.980 ] 00:30:33.247 19:14:47 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:33.247 19:14:47 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:33.247 [2024-06-10 19:14:47.951223] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x188e510 PMD being used: compress_qat 00:30:34.189 aec536b5-8df8-4b49-9866-f2e3e4fa540f 00:30:34.189 19:14:48 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:34.448 5a9e0b2f-0318-4b71-8b3f-9da416ef806f 
00:30:34.448 19:14:49 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:34.448 19:14:49 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:30:34.448 19:14:49 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:34.448 19:14:49 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:34.448 19:14:49 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:34.448 19:14:49 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:34.448 19:14:49 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:34.707 19:14:49 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:34.707 [ 00:30:34.707 { 00:30:34.707 "name": "5a9e0b2f-0318-4b71-8b3f-9da416ef806f", 00:30:34.707 "aliases": [ 00:30:34.707 "lvs0/lv0" 00:30:34.707 ], 00:30:34.707 "product_name": "Logical Volume", 00:30:34.707 "block_size": 512, 00:30:34.707 "num_blocks": 204800, 00:30:34.707 "uuid": "5a9e0b2f-0318-4b71-8b3f-9da416ef806f", 00:30:34.707 "assigned_rate_limits": { 00:30:34.707 "rw_ios_per_sec": 0, 00:30:34.707 "rw_mbytes_per_sec": 0, 00:30:34.707 "r_mbytes_per_sec": 0, 00:30:34.707 "w_mbytes_per_sec": 0 00:30:34.707 }, 00:30:34.707 "claimed": false, 00:30:34.707 "zoned": false, 00:30:34.707 "supported_io_types": { 00:30:34.707 "read": true, 00:30:34.707 "write": true, 00:30:34.707 "unmap": true, 00:30:34.707 "write_zeroes": true, 00:30:34.707 "flush": false, 00:30:34.707 "reset": true, 00:30:34.707 "compare": false, 00:30:34.707 "compare_and_write": false, 00:30:34.707 "abort": false, 00:30:34.707 "nvme_admin": false, 00:30:34.707 "nvme_io": false 00:30:34.707 }, 00:30:34.707 "driver_specific": { 00:30:34.707 "lvol": { 00:30:34.707 "lvol_store_uuid": 
"aec536b5-8df8-4b49-9866-f2e3e4fa540f", 00:30:34.707 "base_bdev": "Nvme0n1", 00:30:34.707 "thin_provision": true, 00:30:34.707 "num_allocated_clusters": 0, 00:30:34.707 "snapshot": false, 00:30:34.707 "clone": false, 00:30:34.707 "esnap_clone": false 00:30:34.707 } 00:30:34.707 } 00:30:34.707 } 00:30:34.707 ] 00:30:34.707 19:14:49 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:34.707 19:14:49 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:30:34.707 19:14:49 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:30:34.966 [2024-06-10 19:14:49.649587] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:34.967 COMP_lvs0/lv0 00:30:34.967 19:14:49 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:34.967 19:14:49 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:30:34.967 19:14:49 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:34.967 19:14:49 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:34.967 19:14:49 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:34.967 19:14:49 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:34.967 19:14:49 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:35.226 19:14:49 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:35.485 [ 00:30:35.485 { 00:30:35.485 "name": "COMP_lvs0/lv0", 00:30:35.485 "aliases": [ 00:30:35.485 "6a40de60-8b2c-50fd-95c1-36e2d0994271" 00:30:35.485 ], 00:30:35.485 "product_name": "compress", 00:30:35.485 "block_size": 512, 
00:30:35.485 "num_blocks": 200704, 00:30:35.485 "uuid": "6a40de60-8b2c-50fd-95c1-36e2d0994271", 00:30:35.485 "assigned_rate_limits": { 00:30:35.485 "rw_ios_per_sec": 0, 00:30:35.485 "rw_mbytes_per_sec": 0, 00:30:35.485 "r_mbytes_per_sec": 0, 00:30:35.485 "w_mbytes_per_sec": 0 00:30:35.485 }, 00:30:35.485 "claimed": false, 00:30:35.485 "zoned": false, 00:30:35.485 "supported_io_types": { 00:30:35.485 "read": true, 00:30:35.485 "write": true, 00:30:35.485 "unmap": false, 00:30:35.485 "write_zeroes": true, 00:30:35.485 "flush": false, 00:30:35.485 "reset": false, 00:30:35.485 "compare": false, 00:30:35.485 "compare_and_write": false, 00:30:35.485 "abort": false, 00:30:35.485 "nvme_admin": false, 00:30:35.485 "nvme_io": false 00:30:35.485 }, 00:30:35.485 "driver_specific": { 00:30:35.485 "compress": { 00:30:35.485 "name": "COMP_lvs0/lv0", 00:30:35.485 "base_bdev_name": "5a9e0b2f-0318-4b71-8b3f-9da416ef806f" 00:30:35.485 } 00:30:35.485 } 00:30:35.485 } 00:30:35.485 ] 00:30:35.485 19:14:50 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:35.485 19:14:50 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:35.745 [2024-06-10 19:14:50.263945] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f53c01b15c0 PMD being used: compress_qat 00:30:35.745 [2024-06-10 19:14:50.265873] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a26710 PMD being used: compress_qat 00:30:35.745 Running I/O for 3 seconds... 
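The bdev_get_bdevs outputs above are self-consistent: lvs0/lv0 was created with `bdev_lvol_create ... lv0 100` and reports 204800 512-byte blocks (exactly 100 MiB), while COMP_lvs0/lv0 stacked on it reports 200704 blocks (98 MiB). A quick check of those figures; reading the missing 2 MiB as space the compress vbdev keeps for its own metadata is our inference, not something the log states:

```python
# Block counts from the bdev_get_bdevs JSON above (block_size is 512 in both)
lvol_blocks = 204800   # lvs0/lv0, created as a 100 MiB thin-provisioned lvol
comp_blocks = 200704   # COMP_lvs0/lv0 layered on top of it

MiB = 2**20
lvol_mib = lvol_blocks * 512 / MiB
comp_mib = comp_blocks * 512 / MiB
print(lvol_mib, comp_mib, lvol_mib - comp_mib)  # 100.0 98.0 2.0
```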
00:30:39.034 00:30:39.034 Latency(us) 00:30:39.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.034 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:39.034 Verification LBA range: start 0x0 length 0x3100 00:30:39.034 COMP_lvs0/lv0 : 3.01 4148.47 16.20 0.00 0.00 7658.32 130.25 12792.63 00:30:39.034 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:39.034 Verification LBA range: start 0x3100 length 0x3100 00:30:39.034 COMP_lvs0/lv0 : 3.01 4228.57 16.52 0.00 0.00 7518.62 121.24 13631.49 00:30:39.034 =================================================================================================================== 00:30:39.034 Total : 8377.04 32.72 0.00 0.00 7587.76 121.24 13631.49 00:30:39.034 0 00:30:39.034 19:14:53 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:39.034 19:14:53 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:39.034 19:14:53 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:39.034 19:14:53 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:39.034 19:14:53 compress_compdev -- compress/compress.sh@78 -- # killprocess 1821140 00:30:39.034 19:14:53 compress_compdev -- common/autotest_common.sh@949 -- # '[' -z 1821140 ']' 00:30:39.034 19:14:53 compress_compdev -- common/autotest_common.sh@953 -- # kill -0 1821140 00:30:39.034 19:14:53 compress_compdev -- common/autotest_common.sh@954 -- # uname 00:30:39.034 19:14:53 compress_compdev -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:39.034 19:14:53 compress_compdev -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1821140 00:30:39.294 19:14:53 compress_compdev -- common/autotest_common.sh@955 -- # process_name=reactor_1 
00:30:39.294 19:14:53 compress_compdev -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:39.294 19:14:53 compress_compdev -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1821140' 00:30:39.294 killing process with pid 1821140 00:30:39.294 19:14:53 compress_compdev -- common/autotest_common.sh@968 -- # kill 1821140 00:30:39.294 Received shutdown signal, test time was about 3.000000 seconds 00:30:39.294 00:30:39.294 Latency(us) 00:30:39.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.294 =================================================================================================================== 00:30:39.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:39.294 19:14:53 compress_compdev -- common/autotest_common.sh@973 -- # wait 1821140 00:30:41.200 19:14:55 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:30:41.200 19:14:55 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:41.200 19:14:55 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1823255 00:30:41.200 19:14:55 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:41.200 19:14:55 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:30:41.200 19:14:55 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1823255 00:30:41.200 19:14:55 compress_compdev -- common/autotest_common.sh@830 -- # '[' -z 1823255 ']' 00:30:41.200 19:14:55 compress_compdev -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.200 19:14:55 compress_compdev -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:41.200 19:14:55 compress_compdev -- common/autotest_common.sh@837 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.200 19:14:55 compress_compdev -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:41.200 19:14:55 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:41.459 [2024-06-10 19:14:55.979173] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:30:41.459 [2024-06-10 19:14:55.979237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1823255 ] 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: 
Requested device 0000:b6:02.0 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.3 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 
0000:b8:01.6 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.1 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:41.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:41.459 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:41.459 [2024-06-10 19:14:56.101812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:41.459 [2024-06-10 19:14:56.186903] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.459 [2024-06-10 19:14:56.186909] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.395 [2024-06-10 19:14:56.879768] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:42.395 19:14:56 compress_compdev -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:42.396 19:14:56 compress_compdev -- common/autotest_common.sh@863 -- # return 0 00:30:42.396 19:14:56 compress_compdev -- compress/compress.sh@74 -- # 
create_vols 4096 00:30:42.396 19:14:56 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:42.396 19:14:56 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:45.684 [2024-06-10 19:15:00.034191] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x18b7800 PMD being used: compress_qat 00:30:45.684 19:15:00 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:45.684 19:15:00 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:45.942 [ 00:30:45.942 { 00:30:45.942 "name": "Nvme0n1", 00:30:45.942 "aliases": [ 00:30:45.942 "59a246b3-c2c5-4094-8e15-cfbf2b29ab0d" 00:30:45.942 ], 00:30:45.942 "product_name": "NVMe disk", 00:30:45.942 "block_size": 512, 00:30:45.942 "num_blocks": 3125627568, 00:30:45.942 "uuid": "59a246b3-c2c5-4094-8e15-cfbf2b29ab0d", 00:30:45.942 "assigned_rate_limits": { 00:30:45.942 "rw_ios_per_sec": 0, 00:30:45.942 "rw_mbytes_per_sec": 0, 00:30:45.942 "r_mbytes_per_sec": 0, 00:30:45.942 "w_mbytes_per_sec": 0 00:30:45.942 }, 00:30:45.942 "claimed": false, 00:30:45.942 "zoned": false, 00:30:45.942 "supported_io_types": { 00:30:45.942 "read": true, 
00:30:45.942 "write": true, 00:30:45.942 "unmap": true, 00:30:45.942 "write_zeroes": true, 00:30:45.942 "flush": true, 00:30:45.942 "reset": true, 00:30:45.942 "compare": false, 00:30:45.942 "compare_and_write": false, 00:30:45.942 "abort": true, 00:30:45.942 "nvme_admin": true, 00:30:45.942 "nvme_io": true 00:30:45.942 }, 00:30:45.942 "driver_specific": { 00:30:45.942 "nvme": [ 00:30:45.942 { 00:30:45.942 "pci_address": "0000:d8:00.0", 00:30:45.942 "trid": { 00:30:45.942 "trtype": "PCIe", 00:30:45.942 "traddr": "0000:d8:00.0" 00:30:45.942 }, 00:30:45.942 "ctrlr_data": { 00:30:45.942 "cntlid": 0, 00:30:45.942 "vendor_id": "0x8086", 00:30:45.942 "model_number": "INTEL SSDPE2KE016T8", 00:30:45.942 "serial_number": "PHLN036005WL1P6AGN", 00:30:45.942 "firmware_revision": "VDV10184", 00:30:45.942 "oacs": { 00:30:45.942 "security": 0, 00:30:45.942 "format": 1, 00:30:45.942 "firmware": 1, 00:30:45.942 "ns_manage": 1 00:30:45.942 }, 00:30:45.942 "multi_ctrlr": false, 00:30:45.942 "ana_reporting": false 00:30:45.942 }, 00:30:45.942 "vs": { 00:30:45.942 "nvme_version": "1.2" 00:30:45.942 }, 00:30:45.942 "ns_data": { 00:30:45.942 "id": 1, 00:30:45.942 "can_share": false 00:30:45.942 } 00:30:45.942 } 00:30:45.942 ], 00:30:45.942 "mp_policy": "active_passive" 00:30:45.942 } 00:30:45.942 } 00:30:45.942 ] 00:30:45.942 19:15:00 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:45.942 19:15:00 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:46.201 [2024-06-10 19:15:00.711407] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x171c510 PMD being used: compress_qat 00:30:46.768 81f80dc2-6f2a-4ea6-b02b-54e3c5782330 00:30:47.027 19:15:01 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:47.027 514ca025-f856-4225-9b69-a415d2b7d0d6 
00:30:47.285 19:15:01 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:47.286 19:15:01 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:30:47.286 19:15:01 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:47.286 19:15:01 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:47.286 19:15:01 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:47.286 19:15:01 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:47.286 19:15:01 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:47.286 19:15:02 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:47.554 [ 00:30:47.555 { 00:30:47.555 "name": "514ca025-f856-4225-9b69-a415d2b7d0d6", 00:30:47.555 "aliases": [ 00:30:47.555 "lvs0/lv0" 00:30:47.555 ], 00:30:47.555 "product_name": "Logical Volume", 00:30:47.555 "block_size": 512, 00:30:47.555 "num_blocks": 204800, 00:30:47.555 "uuid": "514ca025-f856-4225-9b69-a415d2b7d0d6", 00:30:47.555 "assigned_rate_limits": { 00:30:47.555 "rw_ios_per_sec": 0, 00:30:47.555 "rw_mbytes_per_sec": 0, 00:30:47.555 "r_mbytes_per_sec": 0, 00:30:47.555 "w_mbytes_per_sec": 0 00:30:47.555 }, 00:30:47.555 "claimed": false, 00:30:47.555 "zoned": false, 00:30:47.555 "supported_io_types": { 00:30:47.555 "read": true, 00:30:47.555 "write": true, 00:30:47.555 "unmap": true, 00:30:47.555 "write_zeroes": true, 00:30:47.555 "flush": false, 00:30:47.555 "reset": true, 00:30:47.555 "compare": false, 00:30:47.555 "compare_and_write": false, 00:30:47.555 "abort": false, 00:30:47.555 "nvme_admin": false, 00:30:47.555 "nvme_io": false 00:30:47.555 }, 00:30:47.555 "driver_specific": { 00:30:47.555 "lvol": { 00:30:47.556 "lvol_store_uuid": 
"81f80dc2-6f2a-4ea6-b02b-54e3c5782330", 00:30:47.556 "base_bdev": "Nvme0n1", 00:30:47.556 "thin_provision": true, 00:30:47.556 "num_allocated_clusters": 0, 00:30:47.556 "snapshot": false, 00:30:47.556 "clone": false, 00:30:47.556 "esnap_clone": false 00:30:47.556 } 00:30:47.556 } 00:30:47.556 } 00:30:47.556 ] 00:30:47.556 19:15:02 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:47.556 19:15:02 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:30:47.556 19:15:02 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:30:47.822 [2024-06-10 19:15:02.426991] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:47.822 COMP_lvs0/lv0 00:30:47.822 19:15:02 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:47.822 19:15:02 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:30:47.822 19:15:02 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:47.822 19:15:02 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:47.822 19:15:02 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:47.822 19:15:02 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:47.822 19:15:02 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:48.081 19:15:02 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:48.341 [ 00:30:48.341 { 00:30:48.341 "name": "COMP_lvs0/lv0", 00:30:48.341 "aliases": [ 00:30:48.341 "8fe4c4db-b2dd-58a1-8e35-ae9fa14116aa" 00:30:48.341 ], 00:30:48.341 "product_name": "compress", 00:30:48.341 "block_size": 4096, 
00:30:48.341 "num_blocks": 25088, 00:30:48.341 "uuid": "8fe4c4db-b2dd-58a1-8e35-ae9fa14116aa", 00:30:48.341 "assigned_rate_limits": { 00:30:48.341 "rw_ios_per_sec": 0, 00:30:48.341 "rw_mbytes_per_sec": 0, 00:30:48.341 "r_mbytes_per_sec": 0, 00:30:48.341 "w_mbytes_per_sec": 0 00:30:48.341 }, 00:30:48.341 "claimed": false, 00:30:48.341 "zoned": false, 00:30:48.341 "supported_io_types": { 00:30:48.341 "read": true, 00:30:48.341 "write": true, 00:30:48.341 "unmap": false, 00:30:48.341 "write_zeroes": true, 00:30:48.341 "flush": false, 00:30:48.341 "reset": false, 00:30:48.341 "compare": false, 00:30:48.341 "compare_and_write": false, 00:30:48.341 "abort": false, 00:30:48.341 "nvme_admin": false, 00:30:48.341 "nvme_io": false 00:30:48.341 }, 00:30:48.341 "driver_specific": { 00:30:48.341 "compress": { 00:30:48.341 "name": "COMP_lvs0/lv0", 00:30:48.341 "base_bdev_name": "514ca025-f856-4225-9b69-a415d2b7d0d6" 00:30:48.341 } 00:30:48.341 } 00:30:48.341 } 00:30:48.341 ] 00:30:48.341 19:15:02 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:48.341 19:15:02 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:48.341 [2024-06-10 19:15:02.985177] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f9f1c1b15c0 PMD being used: compress_qat 00:30:48.341 [2024-06-10 19:15:02.987055] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x18b4710 PMD being used: compress_qat 00:30:48.341 Running I/O for 3 seconds... 
00:30:51.630 00:30:51.630 Latency(us) 00:30:51.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.630 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:51.630 Verification LBA range: start 0x0 length 0x3100 00:30:51.630 COMP_lvs0/lv0 : 3.00 3965.18 15.49 0.00 0.00 8021.31 176.13 14889.78 00:30:51.630 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:51.630 Verification LBA range: start 0x3100 length 0x3100 00:30:51.630 COMP_lvs0/lv0 : 3.01 4048.22 15.81 0.00 0.00 7865.76 166.30 15099.49 00:30:51.630 =================================================================================================================== 00:30:51.630 Total : 8013.40 31.30 0.00 0.00 7942.70 166.30 15099.49 00:30:51.630 0 00:30:51.630 19:15:06 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:51.630 19:15:06 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:51.630 19:15:06 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:51.889 19:15:06 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:51.889 19:15:06 compress_compdev -- compress/compress.sh@78 -- # killprocess 1823255 00:30:51.889 19:15:06 compress_compdev -- common/autotest_common.sh@949 -- # '[' -z 1823255 ']' 00:30:51.889 19:15:06 compress_compdev -- common/autotest_common.sh@953 -- # kill -0 1823255 00:30:51.889 19:15:06 compress_compdev -- common/autotest_common.sh@954 -- # uname 00:30:51.889 19:15:06 compress_compdev -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:51.890 19:15:06 compress_compdev -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1823255 00:30:51.890 19:15:06 compress_compdev -- common/autotest_common.sh@955 -- # process_name=reactor_1 
00:30:51.890 19:15:06 compress_compdev -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:51.890 19:15:06 compress_compdev -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1823255' 00:30:51.890 killing process with pid 1823255 00:30:51.890 19:15:06 compress_compdev -- common/autotest_common.sh@968 -- # kill 1823255 00:30:51.890 Received shutdown signal, test time was about 3.000000 seconds 00:30:51.890 00:30:51.890 Latency(us) 00:30:51.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.890 =================================================================================================================== 00:30:51.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.890 19:15:06 compress_compdev -- common/autotest_common.sh@973 -- # wait 1823255 00:30:54.426 19:15:08 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:30:54.426 19:15:08 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:54.426 19:15:08 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=1825869 00:30:54.426 19:15:08 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:54.426 19:15:08 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:30:54.426 19:15:08 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 1825869 00:30:54.426 19:15:08 compress_compdev -- common/autotest_common.sh@830 -- # '[' -z 1825869 ']' 00:30:54.426 19:15:08 compress_compdev -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.426 19:15:08 compress_compdev -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:54.426 19:15:08 compress_compdev -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.426 19:15:08 compress_compdev -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:54.426 19:15:08 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:54.426 [2024-06-10 19:15:08.602571] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:30:54.426 [2024-06-10 19:15:08.602626] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825869 ] 00:30:54.426 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.426 EAL: Requested device 0000:b6:01.0 cannot be used 00:30:54.426 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.426 EAL: Requested device 0000:b6:01.1 cannot be used 00:30:54.426 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.426 EAL: Requested device 0000:b6:01.2 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:01.3 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:01.4 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:01.5 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:01.6 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:01.7 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.0 cannot be used 00:30:54.427 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.1 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.2 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.3 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.4 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.5 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.6 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b6:02.7 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.0 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.1 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.2 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.3 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.4 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.5 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.6 cannot be used 00:30:54.427 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:01.7 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.0 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.1 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.2 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.3 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.4 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.5 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.6 cannot be used 00:30:54.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.427 EAL: Requested device 0000:b8:02.7 cannot be used 00:30:54.427 [2024-06-10 19:15:08.720640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:54.427 [2024-06-10 19:15:08.809505] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.427 [2024-06-10 19:15:08.809605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.427 [2024-06-10 19:15:08.809609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.995 [2024-06-10 19:15:09.501897] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:54.995 19:15:09 compress_compdev -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:54.995 19:15:09 compress_compdev -- common/autotest_common.sh@863 -- # return 0 00:30:54.995 
19:15:09 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:30:54.995 19:15:09 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:54.995 19:15:09 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:58.299 [2024-06-10 19:15:12.658906] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f23200 PMD being used: compress_qat 00:30:58.299 19:15:12 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:58.299 19:15:12 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:58.558 [ 00:30:58.558 { 00:30:58.558 "name": "Nvme0n1", 00:30:58.558 "aliases": [ 00:30:58.558 "dc06806b-bda1-4f6e-9ea4-41bf4f66c5de" 00:30:58.558 ], 00:30:58.558 "product_name": "NVMe disk", 00:30:58.558 "block_size": 512, 00:30:58.558 "num_blocks": 3125627568, 00:30:58.558 "uuid": "dc06806b-bda1-4f6e-9ea4-41bf4f66c5de", 00:30:58.558 "assigned_rate_limits": { 00:30:58.558 "rw_ios_per_sec": 0, 00:30:58.558 "rw_mbytes_per_sec": 0, 00:30:58.558 "r_mbytes_per_sec": 0, 00:30:58.558 "w_mbytes_per_sec": 0 00:30:58.558 }, 00:30:58.558 "claimed": false, 00:30:58.558 "zoned": false, 00:30:58.558 
"supported_io_types": { 00:30:58.558 "read": true, 00:30:58.558 "write": true, 00:30:58.558 "unmap": true, 00:30:58.558 "write_zeroes": true, 00:30:58.558 "flush": true, 00:30:58.558 "reset": true, 00:30:58.558 "compare": false, 00:30:58.558 "compare_and_write": false, 00:30:58.558 "abort": true, 00:30:58.558 "nvme_admin": true, 00:30:58.558 "nvme_io": true 00:30:58.558 }, 00:30:58.558 "driver_specific": { 00:30:58.558 "nvme": [ 00:30:58.558 { 00:30:58.558 "pci_address": "0000:d8:00.0", 00:30:58.558 "trid": { 00:30:58.558 "trtype": "PCIe", 00:30:58.558 "traddr": "0000:d8:00.0" 00:30:58.558 }, 00:30:58.558 "ctrlr_data": { 00:30:58.558 "cntlid": 0, 00:30:58.558 "vendor_id": "0x8086", 00:30:58.558 "model_number": "INTEL SSDPE2KE016T8", 00:30:58.558 "serial_number": "PHLN036005WL1P6AGN", 00:30:58.558 "firmware_revision": "VDV10184", 00:30:58.558 "oacs": { 00:30:58.558 "security": 0, 00:30:58.558 "format": 1, 00:30:58.558 "firmware": 1, 00:30:58.558 "ns_manage": 1 00:30:58.559 }, 00:30:58.559 "multi_ctrlr": false, 00:30:58.559 "ana_reporting": false 00:30:58.559 }, 00:30:58.559 "vs": { 00:30:58.559 "nvme_version": "1.2" 00:30:58.559 }, 00:30:58.559 "ns_data": { 00:30:58.559 "id": 1, 00:30:58.559 "can_share": false 00:30:58.559 } 00:30:58.559 } 00:30:58.559 ], 00:30:58.559 "mp_policy": "active_passive" 00:30:58.559 } 00:30:58.559 } 00:30:58.559 ] 00:30:58.559 19:15:13 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:30:58.559 19:15:13 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:58.817 [2024-06-10 19:15:13.376020] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d71620 PMD being used: compress_qat 00:30:59.755 766dd960-b6bb-4262-b0a5-b0a4f083e572 00:30:59.755 19:15:14 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 
00:30:59.755 51cf018c-f1e5-47c5-8da2-e32067dc0a4c 00:30:59.755 19:15:14 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:59.755 19:15:14 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:30:59.755 19:15:14 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:30:59.755 19:15:14 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:30:59.755 19:15:14 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:30:59.755 19:15:14 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:30:59.755 19:15:14 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:00.014 19:15:14 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:00.274 [ 00:31:00.274 { 00:31:00.274 "name": "51cf018c-f1e5-47c5-8da2-e32067dc0a4c", 00:31:00.274 "aliases": [ 00:31:00.274 "lvs0/lv0" 00:31:00.274 ], 00:31:00.274 "product_name": "Logical Volume", 00:31:00.274 "block_size": 512, 00:31:00.274 "num_blocks": 204800, 00:31:00.274 "uuid": "51cf018c-f1e5-47c5-8da2-e32067dc0a4c", 00:31:00.274 "assigned_rate_limits": { 00:31:00.274 "rw_ios_per_sec": 0, 00:31:00.274 "rw_mbytes_per_sec": 0, 00:31:00.274 "r_mbytes_per_sec": 0, 00:31:00.274 "w_mbytes_per_sec": 0 00:31:00.274 }, 00:31:00.274 "claimed": false, 00:31:00.274 "zoned": false, 00:31:00.274 "supported_io_types": { 00:31:00.274 "read": true, 00:31:00.274 "write": true, 00:31:00.274 "unmap": true, 00:31:00.274 "write_zeroes": true, 00:31:00.274 "flush": false, 00:31:00.274 "reset": true, 00:31:00.274 "compare": false, 00:31:00.274 "compare_and_write": false, 00:31:00.274 "abort": false, 00:31:00.274 "nvme_admin": false, 00:31:00.274 "nvme_io": false 00:31:00.274 }, 00:31:00.274 "driver_specific": { 00:31:00.274 "lvol": { 00:31:00.274 
"lvol_store_uuid": "766dd960-b6bb-4262-b0a5-b0a4f083e572", 00:31:00.274 "base_bdev": "Nvme0n1", 00:31:00.274 "thin_provision": true, 00:31:00.274 "num_allocated_clusters": 0, 00:31:00.274 "snapshot": false, 00:31:00.274 "clone": false, 00:31:00.274 "esnap_clone": false 00:31:00.274 } 00:31:00.274 } 00:31:00.274 } 00:31:00.274 ] 00:31:00.274 19:15:14 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:31:00.274 19:15:14 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:00.274 19:15:14 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:00.566 [2024-06-10 19:15:15.118527] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:00.566 COMP_lvs0/lv0 00:31:00.566 19:15:15 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:00.566 19:15:15 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:31:00.566 19:15:15 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:00.566 19:15:15 compress_compdev -- common/autotest_common.sh@900 -- # local i 00:31:00.566 19:15:15 compress_compdev -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:00.566 19:15:15 compress_compdev -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:00.566 19:15:15 compress_compdev -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:00.864 19:15:15 compress_compdev -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:00.864 [ 00:31:00.864 { 00:31:00.864 "name": "COMP_lvs0/lv0", 00:31:00.864 "aliases": [ 00:31:00.864 "0460bcea-2dd0-53ec-9e82-49bbf1952957" 00:31:00.864 ], 00:31:00.864 "product_name": "compress", 00:31:00.864 
"block_size": 512, 00:31:00.864 "num_blocks": 200704, 00:31:00.864 "uuid": "0460bcea-2dd0-53ec-9e82-49bbf1952957", 00:31:00.864 "assigned_rate_limits": { 00:31:00.864 "rw_ios_per_sec": 0, 00:31:00.864 "rw_mbytes_per_sec": 0, 00:31:00.864 "r_mbytes_per_sec": 0, 00:31:00.864 "w_mbytes_per_sec": 0 00:31:00.865 }, 00:31:00.865 "claimed": false, 00:31:00.865 "zoned": false, 00:31:00.865 "supported_io_types": { 00:31:00.865 "read": true, 00:31:00.865 "write": true, 00:31:00.865 "unmap": false, 00:31:00.865 "write_zeroes": true, 00:31:00.865 "flush": false, 00:31:00.865 "reset": false, 00:31:00.865 "compare": false, 00:31:00.865 "compare_and_write": false, 00:31:00.865 "abort": false, 00:31:00.865 "nvme_admin": false, 00:31:00.865 "nvme_io": false 00:31:00.865 }, 00:31:00.865 "driver_specific": { 00:31:00.865 "compress": { 00:31:00.865 "name": "COMP_lvs0/lv0", 00:31:00.865 "base_bdev_name": "51cf018c-f1e5-47c5-8da2-e32067dc0a4c" 00:31:00.865 } 00:31:00.865 } 00:31:00.865 } 00:31:00.865 ] 00:31:00.865 19:15:15 compress_compdev -- common/autotest_common.sh@906 -- # return 0 00:31:00.865 19:15:15 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:01.124 [2024-06-10 19:15:15.703680] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f1f881b1350 PMD being used: compress_qat 00:31:01.124 I/O targets: 00:31:01.124 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:31:01.124 00:31:01.124 00:31:01.124 CUnit - A unit testing framework for C - Version 2.1-3 00:31:01.124 http://cunit.sourceforge.net/ 00:31:01.124 00:31:01.124 00:31:01.124 Suite: bdevio tests on: COMP_lvs0/lv0 00:31:01.124 Test: blockdev write read block ...passed 00:31:01.124 Test: blockdev write zeroes read block ...passed 00:31:01.124 Test: blockdev write zeroes read no split ...passed 00:31:01.124 Test: blockdev write zeroes read split ...passed 00:31:01.124 Test: blockdev write zeroes read split partial 
...passed 00:31:01.124 Test: blockdev reset ...[2024-06-10 19:15:15.748333] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:31:01.124 passed 00:31:01.124 Test: blockdev write read 8 blocks ...passed 00:31:01.124 Test: blockdev write read size > 128k ...passed 00:31:01.124 Test: blockdev write read invalid size ...passed 00:31:01.124 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:01.124 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:01.124 Test: blockdev write read max offset ...passed 00:31:01.124 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:01.124 Test: blockdev writev readv 8 blocks ...passed 00:31:01.124 Test: blockdev writev readv 30 x 1block ...passed 00:31:01.124 Test: blockdev writev readv block ...passed 00:31:01.124 Test: blockdev writev readv size > 128k ...passed 00:31:01.124 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:01.124 Test: blockdev comparev and writev ...passed 00:31:01.124 Test: blockdev nvme passthru rw ...passed 00:31:01.124 Test: blockdev nvme passthru vendor specific ...passed 00:31:01.124 Test: blockdev nvme admin passthru ...passed 00:31:01.124 Test: blockdev copy ...passed 00:31:01.124 00:31:01.124 Run Summary: Type Total Ran Passed Failed Inactive 00:31:01.124 suites 1 1 n/a 0 0 00:31:01.124 tests 23 23 23 0 0 00:31:01.124 asserts 130 130 130 0 n/a 00:31:01.124 00:31:01.124 Elapsed time = 0.165 seconds 00:31:01.124 0 00:31:01.124 19:15:15 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:31:01.124 19:15:15 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:01.383 19:15:16 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:01.643 19:15:16 compress_compdev -- 
compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:31:01.643 19:15:16 compress_compdev -- compress/compress.sh@62 -- # killprocess 1825869 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@949 -- # '[' -z 1825869 ']' 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@953 -- # kill -0 1825869 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@954 -- # uname 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1825869 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1825869' 00:31:01.643 killing process with pid 1825869 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@968 -- # kill 1825869 00:31:01.643 19:15:16 compress_compdev -- common/autotest_common.sh@973 -- # wait 1825869 00:31:04.181 19:15:18 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:31:04.181 19:15:18 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:31:04.181 00:31:04.181 real 0m48.177s 00:31:04.181 user 1m48.846s 00:31:04.181 sys 0m5.494s 00:31:04.181 19:15:18 compress_compdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:04.181 19:15:18 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:31:04.181 ************************************ 00:31:04.181 END TEST compress_compdev 00:31:04.181 ************************************ 00:31:04.181 19:15:18 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:31:04.181 19:15:18 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 
00:31:04.181 19:15:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:04.181 19:15:18 -- common/autotest_common.sh@10 -- # set +x 00:31:04.181 ************************************ 00:31:04.181 START TEST compress_isal 00:31:04.181 ************************************ 00:31:04.181 19:15:18 compress_isal -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:31:04.181 * Looking for test storage... 00:31:04.181 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:31:04.181 19:15:18 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.181 19:15:18 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:31:04.181 19:15:18 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.181 19:15:18 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:31:04.182 19:15:18 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.182 19:15:18 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.182 19:15:18 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.182 19:15:18 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.182 19:15:18 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.182 19:15:18 compress_isal -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.182 19:15:18 compress_isal -- paths/export.sh@5 -- # export PATH 00:31:04.182 19:15:18 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@47 -- # : 0 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.182 19:15:18 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:04.182 19:15:18 
compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1827725 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1827725 00:31:04.182 19:15:18 compress_isal -- common/autotest_common.sh@830 -- # '[' -z 1827725 ']' 00:31:04.182 19:15:18 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:31:04.182 19:15:18 compress_isal -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.182 19:15:18 compress_isal -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:04.182 19:15:18 compress_isal -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.182 19:15:18 compress_isal -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:04.182 19:15:18 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:04.182 [2024-06-10 19:15:18.731898] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:31:04.182 [2024-06-10 19:15:18.731958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827725 ] 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:01.7 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.0 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:04.182 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.3 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.5 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:04.182 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:04.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.182 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:04.182 [2024-06-10 19:15:18.854868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:04.442 [2024-06-10 19:15:18.938350] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.442 [2024-06-10 19:15:18.938355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.010 19:15:19 compress_isal -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:05.010 19:15:19 compress_isal -- common/autotest_common.sh@863 -- # return 0 00:31:05.010 19:15:19 compress_isal -- compress/compress.sh@74 -- # create_vols 00:31:05.010 19:15:19 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:05.010 19:15:19 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:08.299 19:15:22 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:08.299 19:15:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:31:08.299 19:15:22 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:08.299 19:15:22 
compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:08.299 19:15:22 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:08.299 19:15:22 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:08.299 19:15:22 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:08.299 19:15:22 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:08.559 [ 00:31:08.559 { 00:31:08.559 "name": "Nvme0n1", 00:31:08.559 "aliases": [ 00:31:08.559 "7dd5656d-76c5-42cf-b169-50fc3bccfad8" 00:31:08.559 ], 00:31:08.559 "product_name": "NVMe disk", 00:31:08.559 "block_size": 512, 00:31:08.559 "num_blocks": 3125627568, 00:31:08.559 "uuid": "7dd5656d-76c5-42cf-b169-50fc3bccfad8", 00:31:08.559 "assigned_rate_limits": { 00:31:08.559 "rw_ios_per_sec": 0, 00:31:08.559 "rw_mbytes_per_sec": 0, 00:31:08.559 "r_mbytes_per_sec": 0, 00:31:08.559 "w_mbytes_per_sec": 0 00:31:08.559 }, 00:31:08.559 "claimed": false, 00:31:08.559 "zoned": false, 00:31:08.559 "supported_io_types": { 00:31:08.559 "read": true, 00:31:08.559 "write": true, 00:31:08.559 "unmap": true, 00:31:08.559 "write_zeroes": true, 00:31:08.559 "flush": true, 00:31:08.559 "reset": true, 00:31:08.559 "compare": false, 00:31:08.559 "compare_and_write": false, 00:31:08.559 "abort": true, 00:31:08.559 "nvme_admin": true, 00:31:08.559 "nvme_io": true 00:31:08.559 }, 00:31:08.559 "driver_specific": { 00:31:08.559 "nvme": [ 00:31:08.559 { 00:31:08.559 "pci_address": "0000:d8:00.0", 00:31:08.559 "trid": { 00:31:08.559 "trtype": "PCIe", 00:31:08.559 "traddr": "0000:d8:00.0" 00:31:08.559 }, 00:31:08.559 "ctrlr_data": { 00:31:08.559 "cntlid": 0, 00:31:08.559 "vendor_id": "0x8086", 00:31:08.559 "model_number": "INTEL SSDPE2KE016T8", 00:31:08.559 "serial_number": "PHLN036005WL1P6AGN", 00:31:08.559 "firmware_revision": 
"VDV10184", 00:31:08.559 "oacs": { 00:31:08.559 "security": 0, 00:31:08.559 "format": 1, 00:31:08.559 "firmware": 1, 00:31:08.559 "ns_manage": 1 00:31:08.559 }, 00:31:08.559 "multi_ctrlr": false, 00:31:08.559 "ana_reporting": false 00:31:08.559 }, 00:31:08.559 "vs": { 00:31:08.559 "nvme_version": "1.2" 00:31:08.559 }, 00:31:08.559 "ns_data": { 00:31:08.559 "id": 1, 00:31:08.559 "can_share": false 00:31:08.559 } 00:31:08.559 } 00:31:08.559 ], 00:31:08.559 "mp_policy": "active_passive" 00:31:08.559 } 00:31:08.559 } 00:31:08.559 ] 00:31:08.559 19:15:23 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:08.559 19:15:23 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:09.497 6ab20396-6432-40bb-9c5a-1d3745dc200d 00:31:09.497 19:15:24 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:09.757 488fc866-6e0d-4245-9478-0bea2238dc26 00:31:09.757 19:15:24 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:09.757 19:15:24 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:31:09.757 19:15:24 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:09.757 19:15:24 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:09.757 19:15:24 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:09.757 19:15:24 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:09.757 19:15:24 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.016 19:15:24 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:10.275 [ 00:31:10.275 { 00:31:10.275 "name": 
"488fc866-6e0d-4245-9478-0bea2238dc26", 00:31:10.275 "aliases": [ 00:31:10.275 "lvs0/lv0" 00:31:10.275 ], 00:31:10.275 "product_name": "Logical Volume", 00:31:10.275 "block_size": 512, 00:31:10.275 "num_blocks": 204800, 00:31:10.275 "uuid": "488fc866-6e0d-4245-9478-0bea2238dc26", 00:31:10.275 "assigned_rate_limits": { 00:31:10.275 "rw_ios_per_sec": 0, 00:31:10.275 "rw_mbytes_per_sec": 0, 00:31:10.275 "r_mbytes_per_sec": 0, 00:31:10.275 "w_mbytes_per_sec": 0 00:31:10.275 }, 00:31:10.275 "claimed": false, 00:31:10.275 "zoned": false, 00:31:10.275 "supported_io_types": { 00:31:10.275 "read": true, 00:31:10.275 "write": true, 00:31:10.275 "unmap": true, 00:31:10.275 "write_zeroes": true, 00:31:10.275 "flush": false, 00:31:10.275 "reset": true, 00:31:10.275 "compare": false, 00:31:10.275 "compare_and_write": false, 00:31:10.275 "abort": false, 00:31:10.275 "nvme_admin": false, 00:31:10.275 "nvme_io": false 00:31:10.275 }, 00:31:10.275 "driver_specific": { 00:31:10.275 "lvol": { 00:31:10.275 "lvol_store_uuid": "6ab20396-6432-40bb-9c5a-1d3745dc200d", 00:31:10.275 "base_bdev": "Nvme0n1", 00:31:10.275 "thin_provision": true, 00:31:10.275 "num_allocated_clusters": 0, 00:31:10.275 "snapshot": false, 00:31:10.275 "clone": false, 00:31:10.275 "esnap_clone": false 00:31:10.275 } 00:31:10.275 } 00:31:10.275 } 00:31:10.275 ] 00:31:10.275 19:15:24 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:10.275 19:15:24 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:10.275 19:15:24 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:10.534 [2024-06-10 19:15:25.067031] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:10.534 COMP_lvs0/lv0 00:31:10.534 19:15:25 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:10.534 19:15:25 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:31:10.534 19:15:25 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:10.534 19:15:25 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:10.534 19:15:25 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:10.534 19:15:25 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:10.534 19:15:25 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.793 19:15:25 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:10.793 [ 00:31:10.793 { 00:31:10.793 "name": "COMP_lvs0/lv0", 00:31:10.793 "aliases": [ 00:31:10.793 "fe1cab02-1c43-52d3-a4e2-9a45266cecb4" 00:31:10.793 ], 00:31:10.793 "product_name": "compress", 00:31:10.793 "block_size": 512, 00:31:10.793 "num_blocks": 200704, 00:31:10.793 "uuid": "fe1cab02-1c43-52d3-a4e2-9a45266cecb4", 00:31:10.793 "assigned_rate_limits": { 00:31:10.793 "rw_ios_per_sec": 0, 00:31:10.793 "rw_mbytes_per_sec": 0, 00:31:10.793 "r_mbytes_per_sec": 0, 00:31:10.793 "w_mbytes_per_sec": 0 00:31:10.793 }, 00:31:10.793 "claimed": false, 00:31:10.793 "zoned": false, 00:31:10.793 "supported_io_types": { 00:31:10.793 "read": true, 00:31:10.793 "write": true, 00:31:10.793 "unmap": false, 00:31:10.793 "write_zeroes": true, 00:31:10.793 "flush": false, 00:31:10.793 "reset": false, 00:31:10.793 "compare": false, 00:31:10.793 "compare_and_write": false, 00:31:10.793 "abort": false, 00:31:10.793 "nvme_admin": false, 00:31:10.793 "nvme_io": false 00:31:10.793 }, 00:31:10.793 "driver_specific": { 00:31:10.793 "compress": { 00:31:10.793 "name": "COMP_lvs0/lv0", 00:31:10.793 "base_bdev_name": "488fc866-6e0d-4245-9478-0bea2238dc26" 00:31:10.793 } 00:31:10.793 } 00:31:10.794 } 00:31:10.794 ] 00:31:11.053 19:15:25 
compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:11.053 19:15:25 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:11.053 Running I/O for 3 seconds... 00:31:14.343 00:31:14.343 Latency(us) 00:31:14.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.343 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:14.343 Verification LBA range: start 0x0 length 0x3100 00:31:14.343 COMP_lvs0/lv0 : 3.01 3470.07 13.55 0.00 0.00 9161.36 56.52 16043.21 00:31:14.343 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:14.343 Verification LBA range: start 0x3100 length 0x3100 00:31:14.343 COMP_lvs0/lv0 : 3.01 3455.60 13.50 0.00 0.00 9200.95 56.52 16148.07 00:31:14.343 =================================================================================================================== 00:31:14.343 Total : 6925.67 27.05 0.00 0.00 9181.12 56.52 16148.07 00:31:14.343 0 00:31:14.343 19:15:28 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:31:14.343 19:15:28 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:14.343 19:15:28 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:14.602 19:15:29 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:14.602 19:15:29 compress_isal -- compress/compress.sh@78 -- # killprocess 1827725 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@949 -- # '[' -z 1827725 ']' 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@953 -- # kill -0 1827725 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@954 -- # uname 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@954 
-- # '[' Linux = Linux ']' 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1827725 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1827725' 00:31:14.602 killing process with pid 1827725 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@968 -- # kill 1827725 00:31:14.602 Received shutdown signal, test time was about 3.000000 seconds 00:31:14.602 00:31:14.602 Latency(us) 00:31:14.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.602 =================================================================================================================== 00:31:14.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:14.602 19:15:29 compress_isal -- common/autotest_common.sh@973 -- # wait 1827725 00:31:17.139 19:15:31 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:31:17.139 19:15:31 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:17.139 19:15:31 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1829806 00:31:17.139 19:15:31 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:17.139 19:15:31 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:31:17.139 19:15:31 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1829806 00:31:17.139 19:15:31 compress_isal -- common/autotest_common.sh@830 -- # '[' -z 1829806 ']' 00:31:17.139 19:15:31 compress_isal -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.139 19:15:31 compress_isal -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:31:17.139 19:15:31 compress_isal -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.139 19:15:31 compress_isal -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:17.139 19:15:31 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:17.139 [2024-06-10 19:15:31.369741] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:31:17.139 [2024-06-10 19:15:31.369805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829806 ] 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 
0000:b6:01.7 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.0 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.3 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.5 cannot be 
used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:17.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:17.139 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:17.139 [2024-06-10 19:15:31.492872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.139 [2024-06-10 19:15:31.575915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.139 [2024-06-10 19:15:31.575920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.707 19:15:32 compress_isal -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:17.707 19:15:32 compress_isal -- common/autotest_common.sh@863 -- # return 0 00:31:17.707 19:15:32 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:31:17.707 
19:15:32 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:17.707 19:15:32 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:20.993 19:15:35 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:20.993 19:15:35 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:20.993 [ 00:31:20.993 { 00:31:20.993 "name": "Nvme0n1", 00:31:20.993 "aliases": [ 00:31:20.993 "bff4c951-08c5-4742-9256-c284ff516ff6" 00:31:20.993 ], 00:31:20.993 "product_name": "NVMe disk", 00:31:20.993 "block_size": 512, 00:31:20.993 "num_blocks": 3125627568, 00:31:20.993 "uuid": "bff4c951-08c5-4742-9256-c284ff516ff6", 00:31:20.993 "assigned_rate_limits": { 00:31:20.993 "rw_ios_per_sec": 0, 00:31:20.993 "rw_mbytes_per_sec": 0, 00:31:20.993 "r_mbytes_per_sec": 0, 00:31:20.993 "w_mbytes_per_sec": 0 00:31:20.993 }, 00:31:20.993 "claimed": false, 00:31:20.993 "zoned": false, 00:31:20.993 "supported_io_types": { 00:31:20.993 "read": true, 00:31:20.993 "write": true, 00:31:20.993 "unmap": true, 00:31:20.993 "write_zeroes": true, 00:31:20.993 "flush": true, 00:31:20.993 "reset": true, 00:31:20.993 "compare": false, 00:31:20.993 
"compare_and_write": false, 00:31:20.993 "abort": true, 00:31:20.993 "nvme_admin": true, 00:31:20.993 "nvme_io": true 00:31:20.993 }, 00:31:20.993 "driver_specific": { 00:31:20.993 "nvme": [ 00:31:20.993 { 00:31:20.993 "pci_address": "0000:d8:00.0", 00:31:20.993 "trid": { 00:31:20.993 "trtype": "PCIe", 00:31:20.993 "traddr": "0000:d8:00.0" 00:31:20.993 }, 00:31:20.993 "ctrlr_data": { 00:31:20.993 "cntlid": 0, 00:31:20.993 "vendor_id": "0x8086", 00:31:20.993 "model_number": "INTEL SSDPE2KE016T8", 00:31:20.993 "serial_number": "PHLN036005WL1P6AGN", 00:31:20.993 "firmware_revision": "VDV10184", 00:31:20.993 "oacs": { 00:31:20.993 "security": 0, 00:31:20.993 "format": 1, 00:31:20.993 "firmware": 1, 00:31:20.993 "ns_manage": 1 00:31:20.993 }, 00:31:20.993 "multi_ctrlr": false, 00:31:20.993 "ana_reporting": false 00:31:20.993 }, 00:31:20.994 "vs": { 00:31:20.994 "nvme_version": "1.2" 00:31:20.994 }, 00:31:20.994 "ns_data": { 00:31:20.994 "id": 1, 00:31:20.994 "can_share": false 00:31:20.994 } 00:31:20.994 } 00:31:20.994 ], 00:31:20.994 "mp_policy": "active_passive" 00:31:20.994 } 00:31:20.994 } 00:31:20.994 ] 00:31:20.994 19:15:35 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:20.994 19:15:35 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:22.372 418bca72-fc2b-4278-b76a-cc39fb672465 00:31:22.372 19:15:36 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:22.372 914f9a11-50ae-4557-a444-0c5e8b6737e6 00:31:22.372 19:15:36 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:22.372 19:15:36 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:31:22.372 19:15:36 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:22.372 19:15:36 compress_isal -- 
common/autotest_common.sh@900 -- # local i 00:31:22.372 19:15:36 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:22.372 19:15:36 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:22.372 19:15:36 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:22.630 19:15:37 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:22.889 [ 00:31:22.889 { 00:31:22.889 "name": "914f9a11-50ae-4557-a444-0c5e8b6737e6", 00:31:22.889 "aliases": [ 00:31:22.889 "lvs0/lv0" 00:31:22.889 ], 00:31:22.889 "product_name": "Logical Volume", 00:31:22.889 "block_size": 512, 00:31:22.889 "num_blocks": 204800, 00:31:22.889 "uuid": "914f9a11-50ae-4557-a444-0c5e8b6737e6", 00:31:22.889 "assigned_rate_limits": { 00:31:22.889 "rw_ios_per_sec": 0, 00:31:22.889 "rw_mbytes_per_sec": 0, 00:31:22.889 "r_mbytes_per_sec": 0, 00:31:22.889 "w_mbytes_per_sec": 0 00:31:22.889 }, 00:31:22.889 "claimed": false, 00:31:22.889 "zoned": false, 00:31:22.889 "supported_io_types": { 00:31:22.889 "read": true, 00:31:22.889 "write": true, 00:31:22.889 "unmap": true, 00:31:22.889 "write_zeroes": true, 00:31:22.889 "flush": false, 00:31:22.889 "reset": true, 00:31:22.889 "compare": false, 00:31:22.889 "compare_and_write": false, 00:31:22.889 "abort": false, 00:31:22.889 "nvme_admin": false, 00:31:22.889 "nvme_io": false 00:31:22.889 }, 00:31:22.889 "driver_specific": { 00:31:22.889 "lvol": { 00:31:22.889 "lvol_store_uuid": "418bca72-fc2b-4278-b76a-cc39fb672465", 00:31:22.889 "base_bdev": "Nvme0n1", 00:31:22.889 "thin_provision": true, 00:31:22.889 "num_allocated_clusters": 0, 00:31:22.889 "snapshot": false, 00:31:22.889 "clone": false, 00:31:22.889 "esnap_clone": false 00:31:22.889 } 00:31:22.889 } 00:31:22.889 } 00:31:22.889 ] 00:31:22.890 19:15:37 compress_isal -- 
common/autotest_common.sh@906 -- # return 0 00:31:22.890 19:15:37 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:31:22.890 19:15:37 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:31:22.890 [2024-06-10 19:15:37.644371] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:22.890 COMP_lvs0/lv0 00:31:23.148 19:15:37 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:23.148 19:15:37 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:23.407 [ 00:31:23.407 { 00:31:23.407 "name": "COMP_lvs0/lv0", 00:31:23.407 "aliases": [ 00:31:23.407 "e7e3e5fd-ee1f-57a4-87f1-40bbe5b172a6" 00:31:23.407 ], 00:31:23.407 "product_name": "compress", 00:31:23.407 "block_size": 512, 00:31:23.407 "num_blocks": 200704, 00:31:23.407 "uuid": "e7e3e5fd-ee1f-57a4-87f1-40bbe5b172a6", 00:31:23.407 "assigned_rate_limits": { 00:31:23.407 "rw_ios_per_sec": 0, 00:31:23.407 "rw_mbytes_per_sec": 0, 00:31:23.407 "r_mbytes_per_sec": 0, 00:31:23.407 "w_mbytes_per_sec": 0 00:31:23.407 }, 00:31:23.407 "claimed": false, 00:31:23.407 "zoned": false, 00:31:23.407 
"supported_io_types": { 00:31:23.407 "read": true, 00:31:23.407 "write": true, 00:31:23.407 "unmap": false, 00:31:23.407 "write_zeroes": true, 00:31:23.407 "flush": false, 00:31:23.407 "reset": false, 00:31:23.407 "compare": false, 00:31:23.407 "compare_and_write": false, 00:31:23.407 "abort": false, 00:31:23.407 "nvme_admin": false, 00:31:23.407 "nvme_io": false 00:31:23.407 }, 00:31:23.407 "driver_specific": { 00:31:23.407 "compress": { 00:31:23.407 "name": "COMP_lvs0/lv0", 00:31:23.407 "base_bdev_name": "914f9a11-50ae-4557-a444-0c5e8b6737e6" 00:31:23.407 } 00:31:23.407 } 00:31:23.407 } 00:31:23.407 ] 00:31:23.407 19:15:38 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:23.407 19:15:38 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:23.666 Running I/O for 3 seconds... 00:31:26.954 00:31:26.954 Latency(us) 00:31:26.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.954 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:26.954 Verification LBA range: start 0x0 length 0x3100 00:31:26.954 COMP_lvs0/lv0 : 3.01 3433.99 13.41 0.00 0.00 9263.29 59.39 14575.21 00:31:26.954 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:26.954 Verification LBA range: start 0x3100 length 0x3100 00:31:26.954 COMP_lvs0/lv0 : 3.00 3482.96 13.61 0.00 0.00 9146.64 55.71 14680.06 00:31:26.954 =================================================================================================================== 00:31:26.954 Total : 6916.95 27.02 0.00 0.00 9204.56 55.71 14680.06 00:31:26.954 0 00:31:26.954 19:15:41 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:31:26.954 19:15:41 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:26.954 19:15:41 compress_isal -- 
compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:27.213 19:15:41 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:27.213 19:15:41 compress_isal -- compress/compress.sh@78 -- # killprocess 1829806 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@949 -- # '[' -z 1829806 ']' 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@953 -- # kill -0 1829806 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@954 -- # uname 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1829806 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1829806' 00:31:27.213 killing process with pid 1829806 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@968 -- # kill 1829806 00:31:27.213 Received shutdown signal, test time was about 3.000000 seconds 00:31:27.213 00:31:27.213 Latency(us) 00:31:27.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.213 =================================================================================================================== 00:31:27.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:27.213 19:15:41 compress_isal -- common/autotest_common.sh@973 -- # wait 1829806 00:31:29.151 19:15:43 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:31:29.151 19:15:43 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:29.151 19:15:43 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1831832 00:31:29.151 19:15:43 
compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:29.151 19:15:43 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:31:29.151 19:15:43 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1831832 00:31:29.151 19:15:43 compress_isal -- common/autotest_common.sh@830 -- # '[' -z 1831832 ']' 00:31:29.151 19:15:43 compress_isal -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.151 19:15:43 compress_isal -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:29.151 19:15:43 compress_isal -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.151 19:15:43 compress_isal -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:29.151 19:15:43 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:29.431 [2024-06-10 19:15:43.923781] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:31:29.431 [2024-06-10 19:15:43.923843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831832 ] 00:31:29.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.431 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:29.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.431 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:29.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.431 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:29.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.431 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:29.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.431 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:01.7 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.0 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:29.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.3 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.5 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:29.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:29.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:29.432 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:29.432 [2024-06-10 19:15:44.048233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:29.432 [2024-06-10 19:15:44.132801] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.432 [2024-06-10 19:15:44.132807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.369 19:15:44 compress_isal -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:30.369 19:15:44 compress_isal -- common/autotest_common.sh@863 -- # return 0 00:31:30.369 19:15:44 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:31:30.369 19:15:44 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:30.369 19:15:44 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:33.659 19:15:47 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:33.659 19:15:47 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:31:33.659 19:15:47 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:33.659 19:15:47 
compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:33.659 19:15:47 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:33.659 19:15:47 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:33.659 19:15:47 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:33.659 19:15:48 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:33.659 [ 00:31:33.659 { 00:31:33.659 "name": "Nvme0n1", 00:31:33.659 "aliases": [ 00:31:33.659 "1f2471ca-1c8f-4c97-b61c-e7c5e987479f" 00:31:33.659 ], 00:31:33.659 "product_name": "NVMe disk", 00:31:33.659 "block_size": 512, 00:31:33.659 "num_blocks": 3125627568, 00:31:33.659 "uuid": "1f2471ca-1c8f-4c97-b61c-e7c5e987479f", 00:31:33.659 "assigned_rate_limits": { 00:31:33.659 "rw_ios_per_sec": 0, 00:31:33.659 "rw_mbytes_per_sec": 0, 00:31:33.659 "r_mbytes_per_sec": 0, 00:31:33.659 "w_mbytes_per_sec": 0 00:31:33.659 }, 00:31:33.659 "claimed": false, 00:31:33.659 "zoned": false, 00:31:33.659 "supported_io_types": { 00:31:33.659 "read": true, 00:31:33.659 "write": true, 00:31:33.659 "unmap": true, 00:31:33.659 "write_zeroes": true, 00:31:33.659 "flush": true, 00:31:33.659 "reset": true, 00:31:33.659 "compare": false, 00:31:33.659 "compare_and_write": false, 00:31:33.659 "abort": true, 00:31:33.659 "nvme_admin": true, 00:31:33.659 "nvme_io": true 00:31:33.659 }, 00:31:33.659 "driver_specific": { 00:31:33.659 "nvme": [ 00:31:33.659 { 00:31:33.659 "pci_address": "0000:d8:00.0", 00:31:33.659 "trid": { 00:31:33.659 "trtype": "PCIe", 00:31:33.659 "traddr": "0000:d8:00.0" 00:31:33.659 }, 00:31:33.659 "ctrlr_data": { 00:31:33.659 "cntlid": 0, 00:31:33.659 "vendor_id": "0x8086", 00:31:33.660 "model_number": "INTEL SSDPE2KE016T8", 00:31:33.660 "serial_number": "PHLN036005WL1P6AGN", 00:31:33.660 "firmware_revision": 
"VDV10184", 00:31:33.660 "oacs": { 00:31:33.660 "security": 0, 00:31:33.660 "format": 1, 00:31:33.660 "firmware": 1, 00:31:33.660 "ns_manage": 1 00:31:33.660 }, 00:31:33.660 "multi_ctrlr": false, 00:31:33.660 "ana_reporting": false 00:31:33.660 }, 00:31:33.660 "vs": { 00:31:33.660 "nvme_version": "1.2" 00:31:33.660 }, 00:31:33.660 "ns_data": { 00:31:33.660 "id": 1, 00:31:33.660 "can_share": false 00:31:33.660 } 00:31:33.660 } 00:31:33.660 ], 00:31:33.660 "mp_policy": "active_passive" 00:31:33.660 } 00:31:33.660 } 00:31:33.660 ] 00:31:33.660 19:15:48 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:33.660 19:15:48 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:34.597 e806a347-c19c-4c6b-9eaf-48439b3d6fad 00:31:34.856 19:15:49 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:34.856 c47bb46a-258f-4226-b862-aace29d0c5d6 00:31:34.856 19:15:49 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:34.856 19:15:49 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:31:34.856 19:15:49 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:34.856 19:15:49 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:34.856 19:15:49 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:34.857 19:15:49 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:34.857 19:15:49 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:35.116 19:15:49 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:35.375 [ 00:31:35.375 { 00:31:35.375 "name": 
"c47bb46a-258f-4226-b862-aace29d0c5d6", 00:31:35.375 "aliases": [ 00:31:35.375 "lvs0/lv0" 00:31:35.375 ], 00:31:35.375 "product_name": "Logical Volume", 00:31:35.375 "block_size": 512, 00:31:35.375 "num_blocks": 204800, 00:31:35.375 "uuid": "c47bb46a-258f-4226-b862-aace29d0c5d6", 00:31:35.375 "assigned_rate_limits": { 00:31:35.375 "rw_ios_per_sec": 0, 00:31:35.375 "rw_mbytes_per_sec": 0, 00:31:35.375 "r_mbytes_per_sec": 0, 00:31:35.375 "w_mbytes_per_sec": 0 00:31:35.375 }, 00:31:35.375 "claimed": false, 00:31:35.375 "zoned": false, 00:31:35.375 "supported_io_types": { 00:31:35.375 "read": true, 00:31:35.375 "write": true, 00:31:35.375 "unmap": true, 00:31:35.375 "write_zeroes": true, 00:31:35.375 "flush": false, 00:31:35.375 "reset": true, 00:31:35.375 "compare": false, 00:31:35.375 "compare_and_write": false, 00:31:35.375 "abort": false, 00:31:35.375 "nvme_admin": false, 00:31:35.375 "nvme_io": false 00:31:35.375 }, 00:31:35.375 "driver_specific": { 00:31:35.375 "lvol": { 00:31:35.375 "lvol_store_uuid": "e806a347-c19c-4c6b-9eaf-48439b3d6fad", 00:31:35.375 "base_bdev": "Nvme0n1", 00:31:35.375 "thin_provision": true, 00:31:35.375 "num_allocated_clusters": 0, 00:31:35.375 "snapshot": false, 00:31:35.375 "clone": false, 00:31:35.375 "esnap_clone": false 00:31:35.375 } 00:31:35.375 } 00:31:35.375 } 00:31:35.375 ] 00:31:35.375 19:15:50 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:35.375 19:15:50 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:31:35.375 19:15:50 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:31:35.634 [2024-06-10 19:15:50.273777] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:35.634 COMP_lvs0/lv0 00:31:35.634 19:15:50 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:35.634 19:15:50 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:31:35.634 19:15:50 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:35.634 19:15:50 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:35.634 19:15:50 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:35.634 19:15:50 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:35.634 19:15:50 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:35.892 19:15:50 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:36.151 [ 00:31:36.151 { 00:31:36.151 "name": "COMP_lvs0/lv0", 00:31:36.151 "aliases": [ 00:31:36.151 "0856412e-e697-59cb-b6b7-3b5cdb344f6e" 00:31:36.151 ], 00:31:36.151 "product_name": "compress", 00:31:36.151 "block_size": 4096, 00:31:36.151 "num_blocks": 25088, 00:31:36.151 "uuid": "0856412e-e697-59cb-b6b7-3b5cdb344f6e", 00:31:36.151 "assigned_rate_limits": { 00:31:36.151 "rw_ios_per_sec": 0, 00:31:36.151 "rw_mbytes_per_sec": 0, 00:31:36.151 "r_mbytes_per_sec": 0, 00:31:36.151 "w_mbytes_per_sec": 0 00:31:36.151 }, 00:31:36.151 "claimed": false, 00:31:36.151 "zoned": false, 00:31:36.151 "supported_io_types": { 00:31:36.151 "read": true, 00:31:36.151 "write": true, 00:31:36.151 "unmap": false, 00:31:36.151 "write_zeroes": true, 00:31:36.151 "flush": false, 00:31:36.151 "reset": false, 00:31:36.151 "compare": false, 00:31:36.151 "compare_and_write": false, 00:31:36.151 "abort": false, 00:31:36.151 "nvme_admin": false, 00:31:36.151 "nvme_io": false 00:31:36.151 }, 00:31:36.151 "driver_specific": { 00:31:36.151 "compress": { 00:31:36.151 "name": "COMP_lvs0/lv0", 00:31:36.151 "base_bdev_name": "c47bb46a-258f-4226-b862-aace29d0c5d6" 00:31:36.151 } 00:31:36.151 } 00:31:36.151 } 00:31:36.151 ] 00:31:36.151 19:15:50 
compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:36.151 19:15:50 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:36.151 Running I/O for 3 seconds... 00:31:39.451 00:31:39.451 Latency(us) 00:31:39.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.452 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:39.452 Verification LBA range: start 0x0 length 0x3100 00:31:39.452 COMP_lvs0/lv0 : 3.01 3479.05 13.59 0.00 0.00 9134.91 58.57 15204.35 00:31:39.452 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:39.452 Verification LBA range: start 0x3100 length 0x3100 00:31:39.452 COMP_lvs0/lv0 : 3.01 3505.37 13.69 0.00 0.00 9080.07 56.93 15204.35 00:31:39.452 =================================================================================================================== 00:31:39.452 Total : 6984.43 27.28 0.00 0.00 9107.38 56.93 15204.35 00:31:39.452 0 00:31:39.452 19:15:53 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:31:39.452 19:15:53 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:39.452 19:15:54 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:39.711 19:15:54 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:39.711 19:15:54 compress_isal -- compress/compress.sh@78 -- # killprocess 1831832 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@949 -- # '[' -z 1831832 ']' 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@953 -- # kill -0 1831832 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@954 -- # uname 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@954 
-- # '[' Linux = Linux ']' 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1831832 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1831832' 00:31:39.711 killing process with pid 1831832 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@968 -- # kill 1831832 00:31:39.711 Received shutdown signal, test time was about 3.000000 seconds 00:31:39.711 00:31:39.711 Latency(us) 00:31:39.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.711 =================================================================================================================== 00:31:39.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:39.711 19:15:54 compress_isal -- common/autotest_common.sh@973 -- # wait 1831832 00:31:42.248 19:15:56 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:31:42.248 19:15:56 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:31:42.248 19:15:56 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=1833940 00:31:42.248 19:15:56 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:42.248 19:15:56 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:31:42.248 19:15:56 compress_isal -- compress/compress.sh@57 -- # waitforlisten 1833940 00:31:42.248 19:15:56 compress_isal -- common/autotest_common.sh@830 -- # '[' -z 1833940 ']' 00:31:42.248 19:15:56 compress_isal -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.248 19:15:56 compress_isal -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:42.248 
19:15:56 compress_isal -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.248 19:15:56 compress_isal -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:42.248 19:15:56 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:42.248 [2024-06-10 19:15:56.577730] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:31:42.248 [2024-06-10 19:15:56.577792] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833940 ] 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:01.7 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.0 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.3 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.5 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:31:42.248 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:42.248 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:42.248 [2024-06-10 19:15:56.703405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:42.248 [2024-06-10 19:15:56.791860] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.248 [2024-06-10 19:15:56.791953] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.248 [2024-06-10 19:15:56.791957] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.816 19:15:57 compress_isal -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:42.816 19:15:57 compress_isal -- common/autotest_common.sh@863 -- # return 0 00:31:42.816 19:15:57 compress_isal -- compress/compress.sh@58 -- # 
create_vols 00:31:42.816 19:15:57 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:42.816 19:15:57 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:46.105 19:16:00 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=Nvme0n1 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:46.105 19:16:00 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:46.364 [ 00:31:46.364 { 00:31:46.364 "name": "Nvme0n1", 00:31:46.364 "aliases": [ 00:31:46.364 "534a9bf1-73fa-4d9c-90b5-3331bbd0cac0" 00:31:46.364 ], 00:31:46.364 "product_name": "NVMe disk", 00:31:46.364 "block_size": 512, 00:31:46.364 "num_blocks": 3125627568, 00:31:46.364 "uuid": "534a9bf1-73fa-4d9c-90b5-3331bbd0cac0", 00:31:46.364 "assigned_rate_limits": { 00:31:46.364 "rw_ios_per_sec": 0, 00:31:46.364 "rw_mbytes_per_sec": 0, 00:31:46.364 "r_mbytes_per_sec": 0, 00:31:46.364 "w_mbytes_per_sec": 0 00:31:46.364 }, 00:31:46.364 "claimed": false, 00:31:46.364 "zoned": false, 00:31:46.364 "supported_io_types": { 00:31:46.364 "read": true, 00:31:46.364 "write": true, 00:31:46.364 "unmap": true, 00:31:46.364 "write_zeroes": true, 00:31:46.364 "flush": true, 00:31:46.364 "reset": true, 00:31:46.364 "compare": false, 
00:31:46.364 "compare_and_write": false, 00:31:46.364 "abort": true, 00:31:46.364 "nvme_admin": true, 00:31:46.364 "nvme_io": true 00:31:46.364 }, 00:31:46.364 "driver_specific": { 00:31:46.364 "nvme": [ 00:31:46.364 { 00:31:46.364 "pci_address": "0000:d8:00.0", 00:31:46.364 "trid": { 00:31:46.364 "trtype": "PCIe", 00:31:46.364 "traddr": "0000:d8:00.0" 00:31:46.364 }, 00:31:46.364 "ctrlr_data": { 00:31:46.364 "cntlid": 0, 00:31:46.364 "vendor_id": "0x8086", 00:31:46.364 "model_number": "INTEL SSDPE2KE016T8", 00:31:46.364 "serial_number": "PHLN036005WL1P6AGN", 00:31:46.364 "firmware_revision": "VDV10184", 00:31:46.364 "oacs": { 00:31:46.364 "security": 0, 00:31:46.364 "format": 1, 00:31:46.364 "firmware": 1, 00:31:46.364 "ns_manage": 1 00:31:46.364 }, 00:31:46.364 "multi_ctrlr": false, 00:31:46.364 "ana_reporting": false 00:31:46.364 }, 00:31:46.364 "vs": { 00:31:46.364 "nvme_version": "1.2" 00:31:46.364 }, 00:31:46.364 "ns_data": { 00:31:46.364 "id": 1, 00:31:46.364 "can_share": false 00:31:46.364 } 00:31:46.364 } 00:31:46.364 ], 00:31:46.364 "mp_policy": "active_passive" 00:31:46.364 } 00:31:46.364 } 00:31:46.364 ] 00:31:46.364 19:16:01 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:46.364 19:16:01 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:47.742 d852fb89-03ff-4aad-81d6-50e3c80195f3 00:31:47.742 19:16:02 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:47.742 ba906887-4c99-499e-b06a-705636378dc8 00:31:47.742 19:16:02 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:47.742 19:16:02 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=lvs0/lv0 00:31:47.742 19:16:02 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:47.742 19:16:02 compress_isal -- 
common/autotest_common.sh@900 -- # local i 00:31:47.742 19:16:02 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:47.742 19:16:02 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:47.742 19:16:02 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:48.001 19:16:02 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:48.260 [ 00:31:48.260 { 00:31:48.260 "name": "ba906887-4c99-499e-b06a-705636378dc8", 00:31:48.260 "aliases": [ 00:31:48.260 "lvs0/lv0" 00:31:48.260 ], 00:31:48.260 "product_name": "Logical Volume", 00:31:48.260 "block_size": 512, 00:31:48.260 "num_blocks": 204800, 00:31:48.260 "uuid": "ba906887-4c99-499e-b06a-705636378dc8", 00:31:48.260 "assigned_rate_limits": { 00:31:48.260 "rw_ios_per_sec": 0, 00:31:48.260 "rw_mbytes_per_sec": 0, 00:31:48.260 "r_mbytes_per_sec": 0, 00:31:48.260 "w_mbytes_per_sec": 0 00:31:48.260 }, 00:31:48.260 "claimed": false, 00:31:48.260 "zoned": false, 00:31:48.260 "supported_io_types": { 00:31:48.260 "read": true, 00:31:48.260 "write": true, 00:31:48.260 "unmap": true, 00:31:48.260 "write_zeroes": true, 00:31:48.260 "flush": false, 00:31:48.260 "reset": true, 00:31:48.260 "compare": false, 00:31:48.260 "compare_and_write": false, 00:31:48.260 "abort": false, 00:31:48.260 "nvme_admin": false, 00:31:48.260 "nvme_io": false 00:31:48.260 }, 00:31:48.260 "driver_specific": { 00:31:48.260 "lvol": { 00:31:48.260 "lvol_store_uuid": "d852fb89-03ff-4aad-81d6-50e3c80195f3", 00:31:48.260 "base_bdev": "Nvme0n1", 00:31:48.260 "thin_provision": true, 00:31:48.260 "num_allocated_clusters": 0, 00:31:48.260 "snapshot": false, 00:31:48.260 "clone": false, 00:31:48.260 "esnap_clone": false 00:31:48.260 } 00:31:48.260 } 00:31:48.260 } 00:31:48.260 ] 00:31:48.260 19:16:02 compress_isal -- 
common/autotest_common.sh@906 -- # return 0 00:31:48.260 19:16:02 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:48.260 19:16:02 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:48.519 [2024-06-10 19:16:03.042118] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:48.519 COMP_lvs0/lv0 00:31:48.519 19:16:03 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:48.519 19:16:03 compress_isal -- common/autotest_common.sh@898 -- # local bdev_name=COMP_lvs0/lv0 00:31:48.519 19:16:03 compress_isal -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:31:48.519 19:16:03 compress_isal -- common/autotest_common.sh@900 -- # local i 00:31:48.519 19:16:03 compress_isal -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:31:48.519 19:16:03 compress_isal -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:31:48.519 19:16:03 compress_isal -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:48.778 19:16:03 compress_isal -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:48.778 [ 00:31:48.778 { 00:31:48.778 "name": "COMP_lvs0/lv0", 00:31:48.778 "aliases": [ 00:31:48.778 "027efc83-d212-5640-b16b-003df7122f3a" 00:31:48.778 ], 00:31:48.778 "product_name": "compress", 00:31:48.778 "block_size": 512, 00:31:48.778 "num_blocks": 200704, 00:31:48.778 "uuid": "027efc83-d212-5640-b16b-003df7122f3a", 00:31:48.778 "assigned_rate_limits": { 00:31:48.778 "rw_ios_per_sec": 0, 00:31:48.778 "rw_mbytes_per_sec": 0, 00:31:48.778 "r_mbytes_per_sec": 0, 00:31:48.778 "w_mbytes_per_sec": 0 00:31:48.778 }, 00:31:48.778 "claimed": false, 00:31:48.778 "zoned": false, 00:31:48.778 "supported_io_types": { 
00:31:48.778 "read": true, 00:31:48.778 "write": true, 00:31:48.778 "unmap": false, 00:31:48.778 "write_zeroes": true, 00:31:48.778 "flush": false, 00:31:48.778 "reset": false, 00:31:48.778 "compare": false, 00:31:48.778 "compare_and_write": false, 00:31:48.778 "abort": false, 00:31:48.778 "nvme_admin": false, 00:31:48.778 "nvme_io": false 00:31:48.778 }, 00:31:48.778 "driver_specific": { 00:31:48.778 "compress": { 00:31:48.778 "name": "COMP_lvs0/lv0", 00:31:48.778 "base_bdev_name": "ba906887-4c99-499e-b06a-705636378dc8" 00:31:48.778 } 00:31:48.778 } 00:31:48.778 } 00:31:48.778 ] 00:31:49.037 19:16:03 compress_isal -- common/autotest_common.sh@906 -- # return 0 00:31:49.037 19:16:03 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:49.037 I/O targets: 00:31:49.037 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:31:49.037 00:31:49.037 00:31:49.037 CUnit - A unit testing framework for C - Version 2.1-3 00:31:49.037 http://cunit.sourceforge.net/ 00:31:49.037 00:31:49.037 00:31:49.037 Suite: bdevio tests on: COMP_lvs0/lv0 00:31:49.037 Test: blockdev write read block ...passed 00:31:49.037 Test: blockdev write zeroes read block ...passed 00:31:49.037 Test: blockdev write zeroes read no split ...passed 00:31:49.037 Test: blockdev write zeroes read split ...passed 00:31:49.038 Test: blockdev write zeroes read split partial ...passed 00:31:49.038 Test: blockdev reset ...[2024-06-10 19:16:03.700225] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:31:49.038 passed 00:31:49.038 Test: blockdev write read 8 blocks ...passed 00:31:49.038 Test: blockdev write read size > 128k ...passed 00:31:49.038 Test: blockdev write read invalid size ...passed 00:31:49.038 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:49.038 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:49.038 Test: blockdev write read 
max offset ...passed 00:31:49.038 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:49.038 Test: blockdev writev readv 8 blocks ...passed 00:31:49.038 Test: blockdev writev readv 30 x 1block ...passed 00:31:49.038 Test: blockdev writev readv block ...passed 00:31:49.038 Test: blockdev writev readv size > 128k ...passed 00:31:49.038 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:49.038 Test: blockdev comparev and writev ...passed 00:31:49.038 Test: blockdev nvme passthru rw ...passed 00:31:49.038 Test: blockdev nvme passthru vendor specific ...passed 00:31:49.038 Test: blockdev nvme admin passthru ...passed 00:31:49.038 Test: blockdev copy ...passed 00:31:49.038 00:31:49.038 Run Summary: Type Total Ran Passed Failed Inactive 00:31:49.038 suites 1 1 n/a 0 0 00:31:49.038 tests 23 23 23 0 0 00:31:49.038 asserts 130 130 130 0 n/a 00:31:49.038 00:31:49.038 Elapsed time = 0.166 seconds 00:31:49.038 0 00:31:49.038 19:16:03 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:31:49.038 19:16:03 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:49.297 19:16:03 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:49.556 19:16:04 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:31:49.556 19:16:04 compress_isal -- compress/compress.sh@62 -- # killprocess 1833940 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@949 -- # '[' -z 1833940 ']' 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@953 -- # kill -0 1833940 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@954 -- # uname 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@955 -- # ps 
--no-headers -o comm= 1833940 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1833940' 00:31:49.556 killing process with pid 1833940 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@968 -- # kill 1833940 00:31:49.556 19:16:04 compress_isal -- common/autotest_common.sh@973 -- # wait 1833940 00:31:52.092 19:16:06 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:31:52.092 19:16:06 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:31:52.092 00:31:52.092 real 0m47.847s 00:31:52.092 user 1m49.665s 00:31:52.092 sys 0m4.041s 00:31:52.092 19:16:06 compress_isal -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:52.092 19:16:06 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:52.092 ************************************ 00:31:52.092 END TEST compress_isal 00:31:52.092 ************************************ 00:31:52.092 19:16:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:52.092 19:16:06 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:31:52.092 19:16:06 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:31:52.092 19:16:06 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:52.093 19:16:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:52.093 19:16:06 -- common/autotest_common.sh@10 -- # set +x 00:31:52.093 ************************************ 00:31:52.093 START TEST blockdev_crypto_aesni 00:31:52.093 ************************************ 00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:31:52.093 * Looking for test 
storage... 00:31:52.093 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:31:52.093 19:16:06 
blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1835702 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 1835702 00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@830 -- # '[' -z 1835702 ']' 00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:52.093 19:16:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:52.093 19:16:06 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:52.093 [2024-06-10 19:16:06.626847] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:31:52.093 [2024-06-10 19:16:06.626912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835702 ] 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:01.7 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.0 cannot be 
used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.3 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.5 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:52.093 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:52.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.093 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:52.093 [2024-06-10 19:16:06.761961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.093 [2024-06-10 19:16:06.847710] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.031 19:16:07 blockdev_crypto_aesni -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:53.031 19:16:07 blockdev_crypto_aesni -- common/autotest_common.sh@863 -- # return 0 00:31:53.031 19:16:07 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:31:53.031 19:16:07 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:31:53.031 19:16:07 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:31:53.031 19:16:07 blockdev_crypto_aesni -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.031 19:16:07 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:53.031 [2024-06-10 19:16:07.513823] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:53.031 [2024-06-10 19:16:07.521860] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:53.031 [2024-06-10 19:16:07.529873] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:53.031 [2024-06-10 19:16:07.592513] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:55.562 true 00:31:55.562 true 00:31:55.562 true 00:31:55.562 true 00:31:55.562 Malloc0 00:31:55.562 Malloc1 00:31:55.562 Malloc2 00:31:55.563 Malloc3 00:31:55.563 [2024-06-10 19:16:09.948453] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:55.563 crypto_ram 00:31:55.563 [2024-06-10 19:16:09.956478] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:55.563 crypto_ram2 00:31:55.563 [2024-06-10 19:16:09.964496] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:55.563 crypto_ram3 00:31:55.563 [2024-06-10 19:16:09.972518] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:55.563 crypto_ram4 00:31:55.563 19:16:09 blockdev_crypto_aesni -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.563 19:16:09 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:31:55.563 19:16:09 blockdev_crypto_aesni -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.563 19:16:09 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:55.563 19:16:09 blockdev_crypto_aesni -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.563 
19:16:09 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:31:55.563 19:16:09 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:31:55.563 19:16:09 blockdev_crypto_aesni -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.563 19:16:09 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 
00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "91cfd132-eaa4-5b1e-b657-adfd7e4a57ed"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "91cfd132-eaa4-5b1e-b657-adfd7e4a57ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5c807141-11f2-5ca8-be50-e28736df1a23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5c807141-11f2-5ca8-be50-e28736df1a23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "01a6a738-fde4-5ab8-b632-f967fcba03e1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "01a6a738-fde4-5ab8-b632-f967fcba03e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "c837cab3-5551-52da-bc92-30e52ef47e40"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c837cab3-5551-52da-bc92-30e52ef47e40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:31:55.563 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 1835702 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@949 -- # '[' -z 1835702 ']' 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # kill -0 1835702 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # uname 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1835702 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1835702' 00:31:55.563 killing process with pid 1835702 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # kill 1835702 00:31:55.563 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@973 -- # wait 1835702 00:31:56.132 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:56.132 19:16:10 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # 
run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:56.132 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:31:56.132 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:56.132 19:16:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:56.132 ************************************ 00:31:56.132 START TEST bdev_hello_world 00:31:56.132 ************************************ 00:31:56.132 19:16:10 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:56.132 [2024-06-10 19:16:10.825479] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:31:56.132 [2024-06-10 19:16:10.825531] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836430 ] 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:01.7 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.0 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:56.392 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:56.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.392 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:01.3 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:01.5 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:56.393 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:56.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:56.393 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:56.393 [2024-06-10 19:16:10.956643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.393 [2024-06-10 19:16:11.039355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.393 [2024-06-10 19:16:11.060545] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:56.393 [2024-06-10 19:16:11.068569] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:56.393 [2024-06-10 19:16:11.076588] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:56.652 [2024-06-10 19:16:11.185032] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:58.628 [2024-06-10 19:16:13.384304] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:58.628 [2024-06-10 19:16:13.384370] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:58.628 [2024-06-10 19:16:13.384385] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:31:58.887 [2024-06-10 19:16:13.392324] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:58.887 [2024-06-10 19:16:13.392343] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:58.887 [2024-06-10 19:16:13.392353] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:58.887 [2024-06-10 19:16:13.400343] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:58.887 [2024-06-10 19:16:13.400359] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:58.887 [2024-06-10 19:16:13.400369] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:58.887 [2024-06-10 19:16:13.408364] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:58.887 [2024-06-10 19:16:13.408380] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:58.887 [2024-06-10 19:16:13.408390] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:58.887 [2024-06-10 19:16:13.479998] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:58.887 [2024-06-10 19:16:13.480039] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:31:58.887 [2024-06-10 19:16:13.480056] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:58.887 [2024-06-10 19:16:13.481224] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:58.887 [2024-06-10 19:16:13.481299] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:58.887 [2024-06-10 19:16:13.481315] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:58.887 [2024-06-10 19:16:13.481356] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:31:58.887 00:31:58.887 [2024-06-10 19:16:13.481373] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:59.147 00:31:59.147 real 0m3.057s 00:31:59.147 user 0m2.670s 00:31:59.147 sys 0m0.347s 00:31:59.147 19:16:13 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:59.147 19:16:13 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:59.147 ************************************ 00:31:59.147 END TEST bdev_hello_world 00:31:59.147 ************************************ 00:31:59.147 19:16:13 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:31:59.147 19:16:13 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:59.147 19:16:13 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:59.147 19:16:13 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:59.407 ************************************ 00:31:59.407 START TEST bdev_bounds 00:31:59.407 ************************************ 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1836976 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1836976' 00:31:59.407 Process bdevio pid: 1836976 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1836976 00:31:59.407 19:16:13 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 1836976 ']' 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:59.407 19:16:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:59.407 [2024-06-10 19:16:13.971859] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:31:59.407 [2024-06-10 19:16:13.971919] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836976 ] 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.0 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.1 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.2 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.3 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.4 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.5 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.6 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:01.7 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.0 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.1 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.2 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.3 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.4 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.5 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.6 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b6:02.7 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.0 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.1 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.2 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 
EAL: Requested device 0000:b8:01.3 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.4 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.5 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.6 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:01.7 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:02.0 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.407 EAL: Requested device 0000:b8:02.1 cannot be used 00:31:59.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.408 EAL: Requested device 0000:b8:02.2 cannot be used 00:31:59.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.408 EAL: Requested device 0000:b8:02.3 cannot be used 00:31:59.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.408 EAL: Requested device 0000:b8:02.4 cannot be used 00:31:59.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.408 EAL: Requested device 0000:b8:02.5 cannot be used 00:31:59.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.408 EAL: Requested device 0000:b8:02.6 cannot be used 00:31:59.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:59.408 EAL: Requested device 0000:b8:02.7 cannot be used 00:31:59.408 [2024-06-10 19:16:14.103631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.667 [2024-06-10 19:16:14.194823] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.667 [2024-06-10 19:16:14.194916] 
reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.667 [2024-06-10 19:16:14.194920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.667 [2024-06-10 19:16:14.216154] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:59.667 [2024-06-10 19:16:14.224184] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:59.667 [2024-06-10 19:16:14.232198] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:59.667 [2024-06-10 19:16:14.331924] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:02.203 [2024-06-10 19:16:16.527486] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:02.203 [2024-06-10 19:16:16.527565] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:02.203 [2024-06-10 19:16:16.527585] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:02.203 [2024-06-10 19:16:16.535504] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:02.203 [2024-06-10 19:16:16.535522] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:02.203 [2024-06-10 19:16:16.535533] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:02.203 [2024-06-10 19:16:16.543529] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:02.203 [2024-06-10 19:16:16.543546] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:02.204 [2024-06-10 19:16:16.543556] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:02.204 [2024-06-10 19:16:16.551551] 
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:02.204 [2024-06-10 19:16:16.551568] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:02.204 [2024-06-10 19:16:16.551583] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:02.204 19:16:16 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:02.204 19:16:16 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:32:02.204 19:16:16 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:02.204 I/O targets: 00:32:02.204 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:32:02.204 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:32:02.204 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:32:02.204 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:32:02.204 00:32:02.204 00:32:02.204 CUnit - A unit testing framework for C - Version 2.1-3 00:32:02.204 http://cunit.sourceforge.net/ 00:32:02.204 00:32:02.204 00:32:02.204 Suite: bdevio tests on: crypto_ram4 00:32:02.204 Test: blockdev write read block ...passed 00:32:02.204 Test: blockdev write zeroes read block ...passed 00:32:02.204 Test: blockdev write zeroes read no split ...passed 00:32:02.204 Test: blockdev write zeroes read split ...passed 00:32:02.204 Test: blockdev write zeroes read split partial ...passed 00:32:02.204 Test: blockdev reset ...passed 00:32:02.204 Test: blockdev write read 8 blocks ...passed 00:32:02.204 Test: blockdev write read size > 128k ...passed 00:32:02.204 Test: blockdev write read invalid size ...passed 00:32:02.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:02.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:02.204 Test: blockdev write read max offset ...passed 
00:32:02.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:02.204 Test: blockdev writev readv 8 blocks ...passed
00:32:02.204 Test: blockdev writev readv 30 x 1block ...passed
00:32:02.204 Test: blockdev writev readv block ...passed
00:32:02.204 Test: blockdev writev readv size > 128k ...passed
00:32:02.204 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:02.204 Test: blockdev comparev and writev ...passed
00:32:02.204 Test: blockdev nvme passthru rw ...passed
00:32:02.204 Test: blockdev nvme passthru vendor specific ...passed
00:32:02.204 Test: blockdev nvme admin passthru ...passed
00:32:02.204 Test: blockdev copy ...passed
00:32:02.204 Suite: bdevio tests on: crypto_ram3
00:32:02.204 Test: blockdev write read block ...passed
00:32:02.204 Test: blockdev write zeroes read block ...passed
00:32:02.204 Test: blockdev write zeroes read no split ...passed
00:32:02.204 Test: blockdev write zeroes read split ...passed
00:32:02.204 Test: blockdev write zeroes read split partial ...passed
00:32:02.204 Test: blockdev reset ...passed
00:32:02.204 Test: blockdev write read 8 blocks ...passed
00:32:02.204 Test: blockdev write read size > 128k ...passed
00:32:02.204 Test: blockdev write read invalid size ...passed
00:32:02.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:02.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:02.204 Test: blockdev write read max offset ...passed
00:32:02.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:02.204 Test: blockdev writev readv 8 blocks ...passed
00:32:02.204 Test: blockdev writev readv 30 x 1block ...passed
00:32:02.204 Test: blockdev writev readv block ...passed
00:32:02.204 Test: blockdev writev readv size > 128k ...passed
00:32:02.204 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:02.204 Test: blockdev comparev and writev ...passed
00:32:02.204 Test: blockdev nvme passthru rw ...passed
00:32:02.204 Test: blockdev nvme passthru vendor specific ...passed
00:32:02.204 Test: blockdev nvme admin passthru ...passed
00:32:02.204 Test: blockdev copy ...passed
00:32:02.204 Suite: bdevio tests on: crypto_ram2
00:32:02.204 Test: blockdev write read block ...passed
00:32:02.204 Test: blockdev write zeroes read block ...passed
00:32:02.204 Test: blockdev write zeroes read no split ...passed
00:32:02.204 Test: blockdev write zeroes read split ...passed
00:32:02.204 Test: blockdev write zeroes read split partial ...passed
00:32:02.204 Test: blockdev reset ...passed
00:32:02.204 Test: blockdev write read 8 blocks ...passed
00:32:02.204 Test: blockdev write read size > 128k ...passed
00:32:02.204 Test: blockdev write read invalid size ...passed
00:32:02.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:02.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:02.204 Test: blockdev write read max offset ...passed
00:32:02.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:02.204 Test: blockdev writev readv 8 blocks ...passed
00:32:02.204 Test: blockdev writev readv 30 x 1block ...passed
00:32:02.204 Test: blockdev writev readv block ...passed
00:32:02.204 Test: blockdev writev readv size > 128k ...passed
00:32:02.204 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:02.204 Test: blockdev comparev and writev ...passed
00:32:02.204 Test: blockdev nvme passthru rw ...passed
00:32:02.204 Test: blockdev nvme passthru vendor specific ...passed
00:32:02.204 Test: blockdev nvme admin passthru ...passed
00:32:02.204 Test: blockdev copy ...passed
00:32:02.204 Suite: bdevio tests on: crypto_ram
00:32:02.204 Test: blockdev write read block ...passed
00:32:02.204 Test: blockdev write zeroes read block ...passed
00:32:02.204 Test: blockdev write zeroes read no split ...passed
00:32:02.204 Test: blockdev write zeroes read split ...passed
00:32:02.464 Test: blockdev write zeroes read split partial ...passed
00:32:02.464 Test: blockdev reset ...passed
00:32:02.464 Test: blockdev write read 8 blocks ...passed
00:32:02.464 Test: blockdev write read size > 128k ...passed
00:32:02.464 Test: blockdev write read invalid size ...passed
00:32:02.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:02.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:02.464 Test: blockdev write read max offset ...passed
00:32:02.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:02.464 Test: blockdev writev readv 8 blocks ...passed
00:32:02.464 Test: blockdev writev readv 30 x 1block ...passed
00:32:02.464 Test: blockdev writev readv block ...passed
00:32:02.464 Test: blockdev writev readv size > 128k ...passed
00:32:02.464 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:02.464 Test: blockdev comparev and writev ...passed
00:32:02.464 Test: blockdev nvme passthru rw ...passed
00:32:02.464 Test: blockdev nvme passthru vendor specific ...passed
00:32:02.464 Test: blockdev nvme admin passthru ...passed
00:32:02.464 Test: blockdev copy ...passed
00:32:02.464
00:32:02.464 Run Summary: Type Total Ran Passed Failed Inactive
00:32:02.464 suites 4 4 n/a 0 0
00:32:02.464 tests 92 92 92 0 0
00:32:02.464 asserts 520 520 520 0 n/a
00:32:02.464
00:32:02.464 Elapsed time = 0.517 seconds
00:32:02.464 0
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1836976
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 1836976 ']'
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 1836976
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # uname
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1836976
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:32:02.464 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1836976'
killing process with pid 1836976
19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # kill 1836976
19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@973 -- # wait 1836976
00:32:02.723 19:16:17 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT
00:32:02.723
00:32:02.723 real 0m3.514s
00:32:02.723 user 0m9.817s
00:32:02.723 sys 0m0.547s
00:32:02.723 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable
00:32:02.723 19:16:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:32:02.723 ************************************
00:32:02.723 END TEST bdev_bounds
************************************
00:32:02.723 19:16:17 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' ''
00:32:02.723 19:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']'
00:32:02.723 19:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable
00:32:02.723 19:16:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:02.983 ************************************
00:32:02.983 START TEST bdev_nbd
************************************
19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' ''
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]]
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]]
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11')
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1837543
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1837543 /var/tmp/spdk-nbd.sock
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 1837543 ']'
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100
00:32:02.983 19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable
19:16:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:32:02.983 [2024-06-10 19:16:17.576370] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:32:02.983 [2024-06-10 19:16:17.576424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.0 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.1 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.2 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.3 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.4 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.5 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.6 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:01.7 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:02.0 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:02.1 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:02.2 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:02.3 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:02.4 cannot be used
00:32:02.983 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.983 EAL: Requested device 0000:b6:02.5 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b6:02.6 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b6:02.7 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.0 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.1 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.2 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.3 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.4 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.5 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.6 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:01.7 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.0 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.1 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.2 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.3 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.4 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.5 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.6 cannot be used
00:32:02.984 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:02.984 EAL: Requested device 0000:b8:02.7 cannot be used
00:32:02.984 [2024-06-10 19:16:17.710871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:03.243 [2024-06-10 19:16:17.797668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:32:03.243 [2024-06-10 19:16:17.818860] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:32:03.243 [2024-06-10 19:16:17.826883] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:03.243 [2024-06-10 19:16:17.834897] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:03.243 [2024-06-10 19:16:17.945597] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:32:05.780 [2024-06-10 19:16:20.154211] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:32:05.780 [2024-06-10 19:16:20.154275] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:05.780 [2024-06-10 19:16:20.154290] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:05.780 [2024-06-10 19:16:20.162233] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:32:05.780 [2024-06-10 19:16:20.162251] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:05.780 [2024-06-10 19:16:20.162262] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:05.780 [2024-06-10 19:16:20.170250] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:32:05.780 [2024-06-10 19:16:20.170267] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:32:05.780 [2024-06-10 19:16:20.170277] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:05.780 [2024-06-10 19:16:20.178271] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:32:05.780 [2024-06-10 19:16:20.178286] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:32:05.780 [2024-06-10 19:16:20.178296] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@863 -- # return 0
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4'
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4'
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4')
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:32:05.780 1+0 records in
00:32:05.780 1+0 records out
00:32:05.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026555 s, 15.4 MB/s
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096
00:32:05.780 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:32:06.040 1+0 records in
00:32:06.040 1+0 records out
00:32:06.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286087 s, 14.3 MB/s
00:32:06.040 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.299 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096
00:32:06.299 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.299 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:32:06.300 19:16:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0
00:32:06.300 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:32:06.300 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:32:06.300 19:16:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:32:06.300 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:32:06.559 1+0 records in
00:32:06.559 1+0 records out
00:32:06.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034392 s, 11.9 MB/s
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:32:06.559 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:32:06.819 1+0 records in
00:32:06.819 1+0 records out
00:32:06.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304432 s, 13.5 MB/s
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 ))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd0",
00:32:06.819 "bdev_name": "crypto_ram"
00:32:06.819 },
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd1",
00:32:06.819 "bdev_name": "crypto_ram2"
00:32:06.819 },
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd2",
00:32:06.819 "bdev_name": "crypto_ram3"
00:32:06.819 },
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd3",
00:32:06.819 "bdev_name": "crypto_ram4"
00:32:06.819 }
00:32:06.819 ]'
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:32:06.819 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd0",
00:32:06.819 "bdev_name": "crypto_ram"
00:32:06.819 },
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd1",
00:32:06.819 "bdev_name": "crypto_ram2"
00:32:06.819 },
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd2",
00:32:06.819 "bdev_name": "crypto_ram3"
00:32:06.819 },
00:32:06.819 {
00:32:06.819 "nbd_device": "/dev/nbd3",
00:32:06.819 "bdev_name": "crypto_ram4"
00:32:06.819 }
00:32:06.819 ]'
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3'
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3')
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:32:07.079 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:32:07.338 19:16:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:32:07.338 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.597 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:07.856 19:16:22 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11') 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:08.116 19:16:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:32:08.376 /dev/nbd0 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:08.376 19:16:23 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:08.376 1+0 records in 00:32:08.376 1+0 records out 00:32:08.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258108 s, 15.9 MB/s 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:32:08.376 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:08.635 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:08.635 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:32:08.635 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:32:08.636 /dev/nbd1 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:08.636 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:08.895 1+0 records in 00:32:08.895 1+0 records out 00:32:08.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299845 s, 13.7 MB/s 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:08.895 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:32:08.895 /dev/nbd10 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.155 1+0 records in 00:32:09.155 1+0 records out 00:32:09.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277402 s, 14.8 MB/s 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@885 -- # size=4096 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:09.155 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:32:09.155 /dev/nbd11 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # dd 
if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.415 1+0 records in 00:32:09.415 1+0 records out 00:32:09.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330997 s, 12.4 MB/s 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:09.415 19:16:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd0", 00:32:09.675 "bdev_name": "crypto_ram" 00:32:09.675 }, 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd1", 00:32:09.675 "bdev_name": "crypto_ram2" 00:32:09.675 }, 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd10", 00:32:09.675 "bdev_name": "crypto_ram3" 00:32:09.675 }, 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd11", 
00:32:09.675 "bdev_name": "crypto_ram4" 00:32:09.675 } 00:32:09.675 ]' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd0", 00:32:09.675 "bdev_name": "crypto_ram" 00:32:09.675 }, 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd1", 00:32:09.675 "bdev_name": "crypto_ram2" 00:32:09.675 }, 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd10", 00:32:09.675 "bdev_name": "crypto_ram3" 00:32:09.675 }, 00:32:09.675 { 00:32:09.675 "nbd_device": "/dev/nbd11", 00:32:09.675 "bdev_name": "crypto_ram4" 00:32:09.675 } 00:32:09.675 ]' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:09.675 /dev/nbd1 00:32:09.675 /dev/nbd10 00:32:09.675 /dev/nbd11' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:09.675 /dev/nbd1 00:32:09.675 /dev/nbd10 00:32:09.675 /dev/nbd11' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:09.675 19:16:24 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:09.675 256+0 records in 00:32:09.675 256+0 records out 00:32:09.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106086 s, 98.8 MB/s 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:09.675 256+0 records in 00:32:09.675 256+0 records out 00:32:09.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363596 s, 28.8 MB/s 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:09.675 256+0 records in 00:32:09.675 256+0 records out 00:32:09.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404669 s, 25.9 MB/s 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:32:09.675 256+0 records in 00:32:09.675 256+0 records out 00:32:09.675 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0550997 s, 19.0 MB/s 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:09.675 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:32:09.934 256+0 records in 00:32:09.934 256+0 records out 00:32:09.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035211 s, 29.8 MB/s 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:09.934 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:09.935 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.194 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.453 19:16:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:32:10.453 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd10 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:10.714 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:10.714 19:16:25 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:10.974 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:10.974 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:10.974 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:11.234 19:16:25 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:11.234 malloc_lvol_verify 00:32:11.234 19:16:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:11.494 59182564-6b9f-4bc0-92d4-12fc701724f4 00:32:11.494 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:11.753 5875e36c-480c-41d0-ad07-34150f1db8a1 00:32:11.753 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:12.012 /dev/nbd0 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:12.012 mke2fs 1.46.5 (30-Dec-2021) 00:32:12.012 Discarding device blocks: 0/4096 done 00:32:12.012 Creating filesystem with 4096 1k blocks and 1024 inodes 00:32:12.012 00:32:12.012 Allocating group tables: 0/1 done 00:32:12.012 Writing inode tables: 0/1 done 00:32:12.012 Creating journal (1024 blocks): done 00:32:12.012 Writing superblocks and filesystem accounting information: 0/1 done 00:32:12.012 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:12.012 19:16:26 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:12.012 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1837543 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 1837543 ']' 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 1837543 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1837543 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1837543' 00:32:12.272 killing process with pid 1837543 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # kill 1837543 00:32:12.272 19:16:26 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@973 -- # wait 1837543 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:32:12.841 00:32:12.841 real 0m9.812s 00:32:12.841 user 0m12.757s 00:32:12.841 sys 0m3.865s 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:12.841 ************************************ 00:32:12.841 END TEST bdev_nbd 00:32:12.841 ************************************ 00:32:12.841 19:16:27 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:32:12.841 19:16:27 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:32:12.841 19:16:27 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:12.841 19:16:27 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:12.841 
************************************ 00:32:12.841 START TEST bdev_fio 00:32:12.841 ************************************ 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:12.841 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:32:12.841 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 
00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:12.842 ************************************ 00:32:12.842 START TEST bdev_fio_rw_verify 00:32:12.842 ************************************ 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 
00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:12.842 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:13.130 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:13.130 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:13.130 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:13.131 19:16:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:13.407 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:13.407 
job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:13.407 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:13.407 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:13.407 fio-3.35 00:32:13.407 Starting 4 threads 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.0 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.1 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.2 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.3 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.4 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.5 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.6 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:01.7 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.0 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.1 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.2 cannot be used 
00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.3 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.4 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.5 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.6 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b6:02.7 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.0 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.1 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.2 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.3 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.4 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.5 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.6 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:01.7 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.0 cannot be used 00:32:13.407 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.1 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.2 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.3 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.4 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.5 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.6 cannot be used 00:32:13.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:13.407 EAL: Requested device 0000:b8:02.7 cannot be used 00:32:28.276 00:32:28.276 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1840011: Mon Jun 10 19:16:40 2024 00:32:28.276 read: IOPS=24.7k, BW=96.4MiB/s (101MB/s)(964MiB/10001msec) 00:32:28.276 slat (usec): min=15, max=487, avg=53.40, stdev=36.02 00:32:28.276 clat (usec): min=11, max=1585, avg=287.57, stdev=203.51 00:32:28.276 lat (usec): min=39, max=1769, avg=340.97, stdev=226.06 00:32:28.276 clat percentiles (usec): 00:32:28.276 | 50.000th=[ 235], 99.000th=[ 1029], 99.900th=[ 1205], 99.990th=[ 1303], 00:32:28.276 | 99.999th=[ 1385] 00:32:28.276 write: IOPS=27.2k, BW=106MiB/s (112MB/s)(1036MiB/9730msec); 0 zone resets 00:32:28.276 slat (usec): min=22, max=1541, avg=65.05, stdev=35.34 00:32:28.276 clat (usec): min=32, max=2728, avg=351.84, stdev=236.91 00:32:28.276 lat (usec): min=74, max=2900, avg=416.89, stdev=258.42 00:32:28.276 clat percentiles (usec): 00:32:28.276 | 50.000th=[ 302], 99.000th=[ 1237], 99.900th=[ 1450], 99.990th=[ 1729], 00:32:28.276 | 99.999th=[ 2507] 00:32:28.276 bw ( KiB/s): min=90632, 
max=139704, per=97.45%, avg=106201.26, stdev=3274.69, samples=76 00:32:28.276 iops : min=22658, max=34926, avg=26550.32, stdev=818.67, samples=76 00:32:28.276 lat (usec) : 20=0.01%, 50=0.01%, 100=8.51%, 250=37.44%, 500=38.30% 00:32:28.276 lat (usec) : 750=9.67%, 1000=4.18% 00:32:28.276 lat (msec) : 2=1.88%, 4=0.01% 00:32:28.276 cpu : usr=99.62%, sys=0.00%, ctx=67, majf=0, minf=253 00:32:28.276 IO depths : 1=10.4%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:28.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.276 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.276 issued rwts: total=246864,265106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.276 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:28.276 00:32:28.276 Run status group 0 (all jobs): 00:32:28.276 READ: bw=96.4MiB/s (101MB/s), 96.4MiB/s-96.4MiB/s (101MB/s-101MB/s), io=964MiB (1011MB), run=10001-10001msec 00:32:28.276 WRITE: bw=106MiB/s (112MB/s), 106MiB/s-106MiB/s (112MB/s-112MB/s), io=1036MiB (1086MB), run=9730-9730msec 00:32:28.276 00:32:28.276 real 0m13.505s 00:32:28.276 user 0m53.730s 00:32:28.276 sys 0m0.528s 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:28.276 ************************************ 00:32:28.276 END TEST bdev_fio_rw_verify 00:32:28.276 ************************************ 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 
00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:28.276 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "91cfd132-eaa4-5b1e-b657-adfd7e4a57ed"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' 
"num_blocks": 65536,' ' "uuid": "91cfd132-eaa4-5b1e-b657-adfd7e4a57ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5c807141-11f2-5ca8-be50-e28736df1a23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5c807141-11f2-5ca8-be50-e28736df1a23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": 
"crypto_ram3",' ' "aliases": [' ' "01a6a738-fde4-5ab8-b632-f967fcba03e1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "01a6a738-fde4-5ab8-b632-f967fcba03e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "c837cab3-5551-52da-bc92-30e52ef47e40"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c837cab3-5551-52da-bc92-30e52ef47e40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' 
"base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:32:28.277 crypto_ram2 00:32:28.277 crypto_ram3 00:32:28.277 crypto_ram4 ]] 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "91cfd132-eaa4-5b1e-b657-adfd7e4a57ed"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "91cfd132-eaa4-5b1e-b657-adfd7e4a57ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5c807141-11f2-5ca8-be50-e28736df1a23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5c807141-11f2-5ca8-be50-e28736df1a23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "01a6a738-fde4-5ab8-b632-f967fcba03e1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "01a6a738-fde4-5ab8-b632-f967fcba03e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "c837cab3-5551-52da-bc92-30e52ef47e40"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c837cab3-5551-52da-bc92-30e52ef47e40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:32:28.277 19:16:41 
blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:28.277 ************************************ 00:32:28.277 START TEST bdev_fio_trim 00:32:28.277 ************************************ 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:28.277 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # shift 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libasan 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.278 19:16:41 
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:28.278 19:16:41 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:28.278 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:28.278 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:28.278 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:28.278 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:28.278 fio-3.35 00:32:28.278 Starting 4 threads 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.0 cannot be 
used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.1 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.2 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.3 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.4 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.5 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.6 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:01.7 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.0 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.1 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.2 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.3 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.4 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.5 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.6 cannot be used 00:32:28.278 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b6:02.7 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.0 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.1 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.2 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.3 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.4 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.5 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.6 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:01.7 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.0 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.1 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.2 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.3 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.4 cannot be used 00:32:28.278 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.5 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.6 cannot be used 00:32:28.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:28.278 EAL: Requested device 0000:b8:02.7 cannot be used 00:32:40.540 00:32:40.540 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1842537: Mon Jun 10 19:16:54 2024 00:32:40.540 write: IOPS=38.7k, BW=151MiB/s (159MB/s)(1513MiB/10001msec); 0 zone resets 00:32:40.540 slat (usec): min=15, max=415, avg=58.08, stdev=33.95 00:32:40.540 clat (usec): min=38, max=1545, avg=265.11, stdev=169.14 00:32:40.540 lat (usec): min=56, max=1810, avg=323.19, stdev=190.70 00:32:40.540 clat percentiles (usec): 00:32:40.540 | 50.000th=[ 221], 99.000th=[ 840], 99.900th=[ 971], 99.990th=[ 1057], 00:32:40.540 | 99.999th=[ 1467] 00:32:40.540 bw ( KiB/s): min=149424, max=215512, per=100.00%, avg=155159.58, stdev=3698.62, samples=76 00:32:40.540 iops : min=37356, max=53878, avg=38789.89, stdev=924.65, samples=76 00:32:40.540 trim: IOPS=38.7k, BW=151MiB/s (159MB/s)(1513MiB/10001msec); 0 zone resets 00:32:40.540 slat (usec): min=6, max=343, avg=15.95, stdev= 6.46 00:32:40.540 clat (usec): min=34, max=1463, avg=249.49, stdev=109.85 00:32:40.540 lat (usec): min=40, max=1472, avg=265.44, stdev=111.65 00:32:40.540 clat percentiles (usec): 00:32:40.540 | 50.000th=[ 233], 99.000th=[ 586], 99.900th=[ 676], 99.990th=[ 734], 00:32:40.540 | 99.999th=[ 1352] 00:32:40.540 bw ( KiB/s): min=149432, max=215536, per=100.00%, avg=155161.26, stdev=3699.65, samples=76 00:32:40.540 iops : min=37358, max=53884, avg=38790.32, stdev=924.91, samples=76 00:32:40.540 lat (usec) : 50=0.01%, 100=6.80%, 250=50.94%, 500=35.57%, 750=5.50% 00:32:40.540 lat (usec) : 1000=1.17% 00:32:40.540 lat (msec) : 2=0.02% 00:32:40.540 cpu : usr=99.63%, sys=0.00%, ctx=119, majf=0, minf=87 
00:32:40.540 IO depths : 1=8.0%, 2=26.3%, 4=52.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:40.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.540 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.540 issued rwts: total=0,387217,387218,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.540 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:40.540 00:32:40.540 Run status group 0 (all jobs): 00:32:40.540 WRITE: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=1513MiB (1586MB), run=10001-10001msec 00:32:40.540 TRIM: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=1513MiB (1586MB), run=10001-10001msec 00:32:40.540 00:32:40.540 real 0m13.513s 00:32:40.540 user 0m53.838s 00:32:40.540 sys 0m0.518s 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:40.540 ************************************ 00:32:40.540 END TEST bdev_fio_trim 00:32:40.540 ************************************ 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:32:40.540 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:32:40.540 00:32:40.540 real 0m27.388s 00:32:40.540 user 1m47.760s 00:32:40.540 sys 0m1.248s 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:40.540 19:16:54 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:40.540 
************************************ 00:32:40.540 END TEST bdev_fio 00:32:40.541 ************************************ 00:32:40.541 19:16:54 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:40.541 19:16:54 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:40.541 19:16:54 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:32:40.541 19:16:54 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:40.541 19:16:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:40.541 ************************************ 00:32:40.541 START TEST bdev_verify 00:32:40.541 ************************************ 00:32:40.541 19:16:54 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:40.541 [2024-06-10 19:16:54.939432] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:32:40.541 [2024-06-10 19:16:54.939487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844202 ] 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.0 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.1 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.2 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.3 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.4 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.5 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.6 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:01.7 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.0 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.1 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.2 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.3 cannot be used 00:32:40.541 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.4 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.5 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.6 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b6:02.7 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.0 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.1 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.2 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.3 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.4 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.5 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.6 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:01.7 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.0 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.1 cannot be used 00:32:40.541 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.2 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.3 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.4 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.5 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.6 cannot be used 00:32:40.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:40.541 EAL: Requested device 0000:b8:02.7 cannot be used 00:32:40.541 [2024-06-10 19:16:55.070302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:40.541 [2024-06-10 19:16:55.161603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.541 [2024-06-10 19:16:55.161608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.541 [2024-06-10 19:16:55.182880] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:40.541 [2024-06-10 19:16:55.190918] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:40.541 [2024-06-10 19:16:55.198935] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:40.799 [2024-06-10 19:16:55.301341] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:43.328 [2024-06-10 19:16:57.491197] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:43.328 [2024-06-10 19:16:57.491280] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:43.328 
[2024-06-10 19:16:57.491294] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:43.328 [2024-06-10 19:16:57.499214] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:43.328 [2024-06-10 19:16:57.499231] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:43.328 [2024-06-10 19:16:57.499241] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:43.328 [2024-06-10 19:16:57.507236] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:43.328 [2024-06-10 19:16:57.507252] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:43.328 [2024-06-10 19:16:57.507263] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:43.328 [2024-06-10 19:16:57.515258] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:43.328 [2024-06-10 19:16:57.515274] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:43.328 [2024-06-10 19:16:57.515284] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:43.328 Running I/O for 5 seconds... 
00:32:48.595 00:32:48.595 Latency(us) 00:32:48.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.595 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x0 length 0x1000 00:32:48.595 crypto_ram : 5.08 504.35 1.97 0.00 0.00 252854.91 17616.08 171127.60 00:32:48.595 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x1000 length 0x1000 00:32:48.595 crypto_ram : 5.07 504.99 1.97 0.00 0.00 252582.89 17616.08 171127.60 00:32:48.595 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x0 length 0x1000 00:32:48.595 crypto_ram2 : 5.08 504.26 1.97 0.00 0.00 251922.25 19608.37 151833.80 00:32:48.595 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x1000 length 0x1000 00:32:48.595 crypto_ram2 : 5.07 504.89 1.97 0.00 0.00 251668.71 19608.37 152672.67 00:32:48.595 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x0 length 0x1000 00:32:48.595 crypto_ram3 : 5.06 3968.44 15.50 0.00 0.00 31875.51 7130.32 27892.12 00:32:48.595 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x1000 length 0x1000 00:32:48.595 crypto_ram3 : 5.06 3972.76 15.52 0.00 0.00 31839.36 7549.75 27682.41 00:32:48.595 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x0 length 0x1000 00:32:48.595 crypto_ram4 : 5.07 3987.15 15.57 0.00 0.00 31668.26 1730.15 27472.69 00:32:48.595 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:48.595 Verification LBA range: start 0x1000 length 0x1000 00:32:48.595 crypto_ram4 : 5.07 3992.09 15.59 0.00 0.00 31625.13 1808.79 
27472.69 00:32:48.595 =================================================================================================================== 00:32:48.595 Total : 17938.94 70.07 0.00 0.00 56599.33 1730.15 171127.60 00:32:48.595 00:32:48.595 real 0m8.182s 00:32:48.595 user 0m15.535s 00:32:48.595 sys 0m0.364s 00:32:48.595 19:17:03 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:48.595 19:17:03 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:48.595 ************************************ 00:32:48.595 END TEST bdev_verify 00:32:48.595 ************************************ 00:32:48.595 19:17:03 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:48.595 19:17:03 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:32:48.595 19:17:03 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:48.595 19:17:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:48.595 ************************************ 00:32:48.595 START TEST bdev_verify_big_io 00:32:48.595 ************************************ 00:32:48.595 19:17:03 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:48.595 [2024-06-10 19:17:03.201412] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:32:48.595 [2024-06-10 19:17:03.201466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845611 ] 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.0 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.1 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.2 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.3 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.4 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.5 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.6 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:01.7 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:02.0 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.595 EAL: Requested device 0000:b6:02.1 cannot be used 00:32:48.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b6:02.2 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b6:02.3 cannot be used 00:32:48.596 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b6:02.4 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b6:02.5 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b6:02.6 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b6:02.7 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.0 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.1 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.2 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.3 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.4 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.5 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.6 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:01.7 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.0 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.1 cannot be used 00:32:48.596 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.2 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.3 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.4 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.5 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.6 cannot be used 00:32:48.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:48.596 EAL: Requested device 0000:b8:02.7 cannot be used 00:32:48.596 [2024-06-10 19:17:03.341619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:48.854 [2024-06-10 19:17:03.456190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.854 [2024-06-10 19:17:03.456198] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.854 [2024-06-10 19:17:03.477653] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:48.854 [2024-06-10 19:17:03.485690] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:48.854 [2024-06-10 19:17:03.493704] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:48.854 [2024-06-10 19:17:03.599053] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:51.385 [2024-06-10 19:17:05.827671] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:51.385 [2024-06-10 19:17:05.827745] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:51.385 
00:32:51.385 [2024-06-10 19:17:05.827759] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.385 [2024-06-10 19:17:05.835688] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:32:51.385 [2024-06-10 19:17:05.835706] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:51.385 [2024-06-10 19:17:05.835716] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.385 [2024-06-10 19:17:05.843711] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:32:51.385 [2024-06-10 19:17:05.843727] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:32:51.385 [2024-06-10 19:17:05.843737] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.385 [2024-06-10 19:17:05.851742] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:32:51.385 [2024-06-10 19:17:05.851759] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:32:51.385 [2024-06-10 19:17:05.851769] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:51.385 Running I/O for 5 seconds...
00:32:57.954
00:32:57.954 Latency(us)
00:32:57.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:57.954 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x0 length 0x100
00:32:57.954 crypto_ram : 5.70 44.94 2.81 0.00 0.00 2732222.05 124151.40 2308544.92
00:32:57.954 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x100 length 0x100
00:32:57.954 crypto_ram : 5.68 47.71 2.98 0.00 0.00 2600593.65 16882.07 2228014.28
00:32:57.954 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x0 length 0x100
00:32:57.954 crypto_ram2 : 5.71 47.47 2.97 0.00 0.00 2523630.16 6658.46 2214592.51
00:32:57.954 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x100 length 0x100
00:32:57.954 crypto_ram2 : 5.68 47.69 2.98 0.00 0.00 2507723.48 16252.93 2228014.28
00:32:57.954 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x0 length 0x100
00:32:57.954 crypto_ram3 : 5.56 323.82 20.24 0.00 0.00 352704.67 27682.41 510027.37
00:32:57.954 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x100 length 0x100
00:32:57.954 crypto_ram3 : 5.55 336.38 21.02 0.00 0.00 342030.22 42781.90 499961.04
00:32:57.954 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x0 length 0x100
00:32:57.954 crypto_ram4 : 5.68 346.12 21.63 0.00 0.00 320961.16 5242.88 476472.93
00:32:57.954 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:57.954 Verification LBA range: start 0x100 length 0x100
00:32:57.954 crypto_ram4 : 5.63 352.63 22.04 0.00 0.00 317991.79 10905.19 483183.82
00:32:57.954 ===================================================================================================================
00:32:57.954 Total : 1546.78 96.67 0.00 0.00 610430.29 5242.88 2308544.92
00:32:57.954
00:32:57.954 real 0m8.897s
00:32:57.954 user 0m16.895s
00:32:57.954 sys 0m0.393s
00:32:57.954 19:17:12 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable
00:32:57.954 19:17:12 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:32:57.954 ************************************
00:32:57.954 END TEST bdev_verify_big_io
00:32:57.954 ************************************
00:32:57.954 19:17:12 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:57.954 19:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:32:57.954 19:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable
00:32:57.954 19:17:12 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:57.954 ************************************
00:32:57.954 START TEST bdev_write_zeroes
00:32:57.954 ************************************
00:32:57.954 19:17:12 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:57.954 [2024-06-10 19:17:12.172915] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:32:57.954 [2024-06-10 19:17:12.172967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847112 ]
00:32:57.954 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:57.955 EAL: Requested devices 0000:b6:01.0 through 0000:b8:02.7 cannot be used
00:32:57.955 [2024-06-10 19:17:12.304956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:57.955 [2024-06-10 19:17:12.387629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:32:57.955 [2024-06-10 19:17:12.408806] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:32:57.955 [2024-06-10 19:17:12.416831] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:57.955 [2024-06-10 19:17:12.424844] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:57.955 [2024-06-10 19:17:12.531069] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:33:00.491 [2024-06-10 19:17:14.731294] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:33:00.491 [2024-06-10 19:17:14.731357] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:33:00.491 [2024-06-10 19:17:14.731371] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:00.491 [2024-06-10 19:17:14.739314] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:33:00.491 [2024-06-10 19:17:14.739332] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:33:00.491 [2024-06-10 19:17:14.739343] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:00.491 [2024-06-10 19:17:14.747333] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:33:00.491 [2024-06-10 19:17:14.747349] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:33:00.491 [2024-06-10 19:17:14.747360] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:00.491 [2024-06-10 19:17:14.755353] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:33:00.491 [2024-06-10 19:17:14.755369] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:33:00.491 [2024-06-10 19:17:14.755380] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:00.491 Running I/O for 1 seconds...
00:33:01.427
00:33:01.427 Latency(us)
00:33:01.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:01.427 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:01.427 crypto_ram : 1.02 2148.25 8.39 0.00 0.00 59193.34 4980.74 70883.74
00:33:01.427 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:01.427 crypto_ram2 : 1.02 2153.98 8.41 0.00 0.00 58739.18 4954.52 65850.57
00:33:01.427 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:01.427 crypto_ram3 : 1.02 16570.77 64.73 0.00 0.00 7618.05 2267.55 9856.61
00:33:01.427 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:01.427 crypto_ram4 : 1.02 16555.31 64.67 0.00 0.00 7593.60 2254.44 7969.18
00:33:01.427 ===================================================================================================================
00:33:01.427 Total : 37428.31 146.20 0.00 0.00 13532.19 2254.44 70883.74
00:33:01.686
00:33:01.686 real 0m4.108s
00:33:01.686 user 0m3.723s
00:33:01.686 sys 0m0.343s
00:33:01.686 19:17:16 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:01.686 19:17:16 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:33:01.686 ************************************
00:33:01.686 END TEST bdev_write_zeroes
00:33:01.686 ************************************
00:33:01.686 19:17:16 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:01.686 19:17:16 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:33:01.686 19:17:16 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable
00:33:01.686 19:17:16 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:33:01.686 ************************************
00:33:01.686 START TEST bdev_json_nonenclosed
00:33:01.686 ************************************
00:33:01.686 19:17:16 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:01.686 [2024-06-10 19:17:16.357620] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:33:01.686 [2024-06-10 19:17:16.357673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847838 ]
00:33:01.686 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:01.686 EAL: Requested devices 0000:b6:01.0 through 0000:b8:02.7 cannot be used
00:33:01.945 [2024-06-10 19:17:16.490247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:01.945 [2024-06-10 19:17:16.574027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:33:01.945 [2024-06-10 19:17:16.574093] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:33:01.945 [2024-06-10 19:17:16.574112] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:33:01.945 [2024-06-10 19:17:16.574125] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:01.945
00:33:01.945 real 0m0.358s
00:33:01.945 user 0m0.199s
00:33:01.945 sys 0m0.157s
00:33:01.945 19:17:16 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:01.945 19:17:16 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:33:01.945 ************************************
00:33:01.945 END TEST bdev_json_nonenclosed
00:33:01.945 ************************************
00:33:01.945 19:17:16 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:01.945 19:17:16 blockdev_crypto_aesni -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:33:01.945 19:17:16 blockdev_crypto_aesni -- common/autotest_common.sh@1106 -- # xtrace_disable
00:33:01.945 19:17:16 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:33:02.204 ************************************
00:33:02.204 START TEST bdev_json_nonarray
00:33:02.204 ************************************
00:33:02.204 19:17:16 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:02.204 [2024-06-10 19:17:16.792101] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:33:02.204 [2024-06-10 19:17:16.792155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847942 ]
00:33:02.204 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:02.204 EAL: Requested devices 0000:b6:01.0 through 0000:b8:02.7 cannot be used
00:33:02.463 [2024-06-10 19:17:16.914072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:02.463 [2024-06-10 19:17:17.001854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:33:02.463 [2024-06-10 19:17:17.001921] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:33:02.463 [2024-06-10 19:17:17.001940] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:33:02.463 [2024-06-10 19:17:17.001952] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:02.463
00:33:02.463 real 0m0.352s
00:33:02.463 user 0m0.212s
00:33:02.463 sys 0m0.137s
00:33:02.463 19:17:17 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:02.463 19:17:17 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:33:02.463 ************************************
00:33:02.463 END TEST bdev_json_nonarray
00:33:02.463 ************************************
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]]
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]]
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]]
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]]
00:33:02.463 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]]
00:33:02.464 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]]
00:33:02.464 19:17:17 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]]
00:33:02.464
00:33:02.464 real 1m10.679s
00:33:02.464 user 2m54.064s
00:33:02.464 sys 0m8.646s
00:33:02.464 19:17:17 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:02.464 19:17:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:33:02.464 ************************************
00:33:02.464 END TEST blockdev_crypto_aesni
00:33:02.464 ************************************
00:33:02.464 19:17:17 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw
00:33:02.464 19:17:17 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:33:02.464 19:17:17 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:33:02.464 19:17:17 -- common/autotest_common.sh@10 -- # set +x
00:33:02.722 ************************************
00:33:02.722 START TEST blockdev_crypto_sw
00:33:02.722 ************************************
00:33:02.722 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw
00:33:02.722 * Looking for test storage...
00:33:02.722 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # :
00:33:02.722 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']'
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device=
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek=
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx=
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc=
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@687 -- # '[' -n '' ']'
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]]
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]]
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1848012
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:33:02.723 19:17:17 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 1848012
00:33:02.723 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@830 -- # '[' -z 1848012 ']'
00:33:02.723 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:02.723 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local max_retries=100
00:33:02.723 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:02.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:02.723 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@839 -- # xtrace_disable
00:33:02.723 19:17:17 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:33:02.723 [2024-06-10 19:17:17.414857] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:33:02.723 [2024-06-10 19:17:17.414921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848012 ] 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:02.982 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:02.982 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:02.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:02.982 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:02.982 [2024-06-10 19:17:17.550085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.982 [2024-06-10 19:17:17.637992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.548 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:03.548 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@863 -- # return 0 00:33:03.548 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:03.548 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:33:03.548 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:33:03.548 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.548 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:03.805 Malloc0 00:33:03.805 Malloc1 00:33:03.805 true 00:33:03.805 true 00:33:03.805 true 00:33:03.805 [2024-06-10 19:17:18.553560] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:03.805 crypto_ram 00:33:03.805 [2024-06-10 19:17:18.561596] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_sw2" 00:33:04.063 crypto_ram2 00:33:04.063 [2024-06-10 19:17:18.569615] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:04.063 crypto_ram3 00:33:04.063 [ 00:33:04.063 { 00:33:04.063 "name": "Malloc1", 00:33:04.063 "aliases": [ 00:33:04.063 "77f42c32-71b0-4796-a987-e4a04ef831b1" 00:33:04.063 ], 00:33:04.063 "product_name": "Malloc disk", 00:33:04.063 "block_size": 4096, 00:33:04.063 "num_blocks": 4096, 00:33:04.063 "uuid": "77f42c32-71b0-4796-a987-e4a04ef831b1", 00:33:04.063 "assigned_rate_limits": { 00:33:04.063 "rw_ios_per_sec": 0, 00:33:04.063 "rw_mbytes_per_sec": 0, 00:33:04.063 "r_mbytes_per_sec": 0, 00:33:04.063 "w_mbytes_per_sec": 0 00:33:04.063 }, 00:33:04.063 "claimed": true, 00:33:04.063 "claim_type": "exclusive_write", 00:33:04.063 "zoned": false, 00:33:04.063 "supported_io_types": { 00:33:04.063 "read": true, 00:33:04.063 "write": true, 00:33:04.063 "unmap": true, 00:33:04.063 "write_zeroes": true, 00:33:04.063 "flush": true, 00:33:04.063 "reset": true, 00:33:04.063 "compare": false, 00:33:04.063 "compare_and_write": false, 00:33:04.063 "abort": true, 00:33:04.063 "nvme_admin": false, 00:33:04.063 "nvme_io": false 00:33:04.063 }, 00:33:04.063 "memory_domains": [ 00:33:04.063 { 00:33:04.063 "dma_device_id": "system", 00:33:04.063 "dma_device_type": 1 00:33:04.063 }, 00:33:04.063 { 00:33:04.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.063 "dma_device_type": 2 00:33:04.063 } 00:33:04.063 ], 00:33:04.063 "driver_specific": {} 00:33:04.063 } 00:33:04.063 ] 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:04.063 19:17:18 blockdev_crypto_sw -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.063 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:33:04.063 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 
00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "80cf73fd-cc6c-5a46-b8d9-6c2de846c168"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "80cf73fd-cc6c-5a46-b8d9-6c2de846c168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "8f34daef-85fc-5d36-a101-0abec1441cd6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "8f34daef-85fc-5d36-a101-0abec1441cd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:33:04.064 19:17:18 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 1848012 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@949 -- # '[' -z 1848012 ']' 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # kill -0 1848012 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # uname 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1848012 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1848012' 00:33:04.064 killing process with pid 1848012 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # kill 1848012 00:33:04.064 19:17:18 blockdev_crypto_sw -- common/autotest_common.sh@973 -- # wait 1848012 00:33:04.631 19:17:19 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:04.631 19:17:19 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:33:04.631 19:17:19 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:33:04.631 19:17:19 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:04.631 19:17:19 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:04.631 ************************************ 00:33:04.631 START TEST bdev_hello_world 00:33:04.631 ************************************ 00:33:04.631 19:17:19 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:33:04.631 [2024-06-10 19:17:19.252531] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:33:04.631 [2024-06-10 19:17:19.252609] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848310 ] 00:33:04.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.631 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:04.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.631 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:01.5 cannot 
be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:04.632 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:04.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:04.632 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:04.632 [2024-06-10 19:17:19.387543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.891 [2024-06-10 19:17:19.471489] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.891 [2024-06-10 19:17:19.635299] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_sw" 00:33:04.891 [2024-06-10 19:17:19.635366] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:04.891 [2024-06-10 19:17:19.635380] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:04.891 [2024-06-10 19:17:19.643317] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:04.891 [2024-06-10 19:17:19.643334] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:04.891 [2024-06-10 19:17:19.643345] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:05.149 [2024-06-10 19:17:19.651337] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:05.149 [2024-06-10 19:17:19.651355] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:05.149 [2024-06-10 19:17:19.651365] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:05.149 [2024-06-10 19:17:19.691347] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:05.149 [2024-06-10 19:17:19.691379] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:33:05.149 [2024-06-10 19:17:19.691397] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:05.149 [2024-06-10 19:17:19.693307] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:05.149 [2024-06-10 19:17:19.693388] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:05.149 [2024-06-10 19:17:19.693403] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:05.149 [2024-06-10 19:17:19.693436] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:33:05.149 00:33:05.149 [2024-06-10 19:17:19.693452] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:05.149 00:33:05.149 real 0m0.691s 00:33:05.149 user 0m0.452s 00:33:05.149 sys 0m0.223s 00:33:05.149 19:17:19 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:05.150 19:17:19 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:05.150 ************************************ 00:33:05.150 END TEST bdev_hello_world 00:33:05.150 ************************************ 00:33:05.408 19:17:19 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:33:05.408 19:17:19 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:05.408 19:17:19 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:05.408 19:17:19 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:05.408 ************************************ 00:33:05.408 START TEST bdev_bounds 00:33:05.408 ************************************ 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1848578 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1848578' 00:33:05.408 Process bdevio pid: 1848578 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1848578 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@830 -- # '[' -z 1848578 ']' 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:05.408 19:17:19 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:05.408 [2024-06-10 19:17:20.025617] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:33:05.408 [2024-06-10 19:17:20.025677] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848578 ] 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 
0000:b6:01.5 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.408 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.3 cannot be 
used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:05.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:05.409 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:05.409 [2024-06-10 19:17:20.164285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:05.666 [2024-06-10 19:17:20.253680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.666 [2024-06-10 19:17:20.253708] reactor.c: 929:reactor_run: *NOTICE*: Reactor 
started on core 2 00:33:05.666 [2024-06-10 19:17:20.253711] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.666 [2024-06-10 19:17:20.421068] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:05.666 [2024-06-10 19:17:20.421136] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:05.666 [2024-06-10 19:17:20.421150] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:05.923 [2024-06-10 19:17:20.429086] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:05.923 [2024-06-10 19:17:20.429104] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:05.923 [2024-06-10 19:17:20.429115] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:05.923 [2024-06-10 19:17:20.437107] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:05.923 [2024-06-10 19:17:20.437123] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:05.923 [2024-06-10 19:17:20.437134] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:06.180 19:17:20 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:06.180 19:17:20 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:33:06.180 19:17:20 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:06.437 I/O targets: 00:33:06.437 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:33:06.437 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:33:06.437 00:33:06.437 00:33:06.437 CUnit - A unit testing framework for C - Version 2.1-3 00:33:06.437 http://cunit.sourceforge.net/ 00:33:06.437 00:33:06.437 
00:33:06.437 Suite: bdevio tests on: crypto_ram3 00:33:06.437 Test: blockdev write read block ...passed 00:33:06.437 Test: blockdev write zeroes read block ...passed 00:33:06.437 Test: blockdev write zeroes read no split ...passed 00:33:06.437 Test: blockdev write zeroes read split ...passed 00:33:06.437 Test: blockdev write zeroes read split partial ...passed 00:33:06.437 Test: blockdev reset ...passed 00:33:06.437 Test: blockdev write read 8 blocks ...passed 00:33:06.437 Test: blockdev write read size > 128k ...passed 00:33:06.437 Test: blockdev write read invalid size ...passed 00:33:06.437 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:06.437 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:06.437 Test: blockdev write read max offset ...passed 00:33:06.437 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:06.437 Test: blockdev writev readv 8 blocks ...passed 00:33:06.437 Test: blockdev writev readv 30 x 1block ...passed 00:33:06.437 Test: blockdev writev readv block ...passed 00:33:06.437 Test: blockdev writev readv size > 128k ...passed 00:33:06.437 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:06.437 Test: blockdev comparev and writev ...passed 00:33:06.437 Test: blockdev nvme passthru rw ...passed 00:33:06.437 Test: blockdev nvme passthru vendor specific ...passed 00:33:06.437 Test: blockdev nvme admin passthru ...passed 00:33:06.437 Test: blockdev copy ...passed 00:33:06.437 Suite: bdevio tests on: crypto_ram 00:33:06.437 Test: blockdev write read block ...passed 00:33:06.437 Test: blockdev write zeroes read block ...passed 00:33:06.437 Test: blockdev write zeroes read no split ...passed 00:33:06.437 Test: blockdev write zeroes read split ...passed 00:33:06.437 Test: blockdev write zeroes read split partial ...passed 00:33:06.437 Test: blockdev reset ...passed 00:33:06.437 Test: blockdev write read 8 blocks ...passed 00:33:06.437 Test: blockdev 
write read size > 128k ...passed 00:33:06.437 Test: blockdev write read invalid size ...passed 00:33:06.437 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:06.437 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:06.437 Test: blockdev write read max offset ...passed 00:33:06.437 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:06.437 Test: blockdev writev readv 8 blocks ...passed 00:33:06.437 Test: blockdev writev readv 30 x 1block ...passed 00:33:06.437 Test: blockdev writev readv block ...passed 00:33:06.437 Test: blockdev writev readv size > 128k ...passed 00:33:06.437 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:06.437 Test: blockdev comparev and writev ...passed 00:33:06.437 Test: blockdev nvme passthru rw ...passed 00:33:06.437 Test: blockdev nvme passthru vendor specific ...passed 00:33:06.437 Test: blockdev nvme admin passthru ...passed 00:33:06.437 Test: blockdev copy ...passed 00:33:06.437 00:33:06.437 Run Summary: Type Total Ran Passed Failed Inactive 00:33:06.437 suites 2 2 n/a 0 0 00:33:06.437 tests 46 46 46 0 0 00:33:06.437 asserts 260 260 260 0 n/a 00:33:06.437 00:33:06.437 Elapsed time = 0.081 seconds 00:33:06.437 0 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1848578 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 1848578 ']' 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 1848578 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1848578 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 
-- # process_name=reactor_0 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1848578' 00:33:06.437 killing process with pid 1848578 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # kill 1848578 00:33:06.437 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@973 -- # wait 1848578 00:33:06.695 19:17:21 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:33:06.695 00:33:06.695 real 0m1.362s 00:33:06.696 user 0m3.497s 00:33:06.696 sys 0m0.385s 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:06.696 ************************************ 00:33:06.696 END TEST bdev_bounds 00:33:06.696 ************************************ 00:33:06.696 19:17:21 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:33:06.696 19:17:21 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:33:06.696 19:17:21 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:06.696 19:17:21 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:06.696 ************************************ 00:33:06.696 START TEST bdev_nbd 00:33:06.696 ************************************ 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:33:06.696 19:17:21 
blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1848872 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1848872 /var/tmp/spdk-nbd.sock 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 1848872 ']' 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:06.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:06.696 19:17:21 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:06.954 [2024-06-10 19:17:21.479784] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:06.954 [2024-06-10 19:17:21.479831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.954 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:06.954 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:33:06.955 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:06.955 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:06.955 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:06.955 [2024-06-10 19:17:21.600923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.955 [2024-06-10 19:17:21.682332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.215 [2024-06-10 19:17:21.841375] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:07.215 [2024-06-10 19:17:21.841442] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:07.215 [2024-06-10 19:17:21.841456] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:07.215 [2024-06-10 19:17:21.849392] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:07.215 [2024-06-10 19:17:21.849409] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:07.215 [2024-06-10 19:17:21.849420] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:07.215 [2024-06-10 19:17:21.857413] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:07.215 [2024-06-10 19:17:21.857433] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: crypto_ram2 00:33:07.215 [2024-06-10 19:17:21.857443] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:07.781 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 
-- # nbd_device=/dev/nbd0 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:08.260 1+0 records in 00:33:08.260 1+0 records out 00:33:08.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266485 s, 15.4 MB/s 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # 
return 0 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:08.260 1+0 records in 00:33:08.260 1+0 records out 00:33:08.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321193 s, 12.8 MB/s 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:08.260 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:08.261 19:17:22 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:33:08.261 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:08.261 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:08.261 19:17:22 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:08.598 { 00:33:08.598 "nbd_device": "/dev/nbd0", 00:33:08.598 "bdev_name": "crypto_ram" 00:33:08.598 }, 00:33:08.598 { 00:33:08.598 "nbd_device": "/dev/nbd1", 00:33:08.598 "bdev_name": "crypto_ram3" 00:33:08.598 } 00:33:08.598 ]' 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:08.598 { 00:33:08.598 "nbd_device": "/dev/nbd0", 00:33:08.598 "bdev_name": "crypto_ram" 00:33:08.598 }, 00:33:08.598 { 00:33:08.598 "nbd_device": "/dev/nbd1", 00:33:08.598 "bdev_name": "crypto_ram3" 00:33:08.598 } 00:33:08.598 ]' 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:08.598 19:17:23 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:08.598 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:08.874 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:09.136 19:17:23 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.136 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:09.393 19:17:23 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:33:09.650 /dev/nbd0 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:09.650 1+0 records in 00:33:09.650 1+0 records out 00:33:09.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261723 s, 15.7 MB/s 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # 
return 0 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:09.650 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:33:09.908 /dev/nbd1 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:09.908 1+0 records in 00:33:09.908 1+0 records out 00:33:09.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334052 s, 12.3 MB/s 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@885 -- # size=4096 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.908 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:10.165 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:10.165 { 00:33:10.165 "nbd_device": "/dev/nbd0", 00:33:10.165 "bdev_name": "crypto_ram" 00:33:10.165 }, 00:33:10.165 { 00:33:10.165 "nbd_device": "/dev/nbd1", 00:33:10.165 "bdev_name": "crypto_ram3" 00:33:10.165 } 00:33:10.165 ]' 00:33:10.165 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:10.165 { 00:33:10.165 "nbd_device": "/dev/nbd0", 00:33:10.165 "bdev_name": "crypto_ram" 00:33:10.165 }, 00:33:10.165 { 00:33:10.165 "nbd_device": "/dev/nbd1", 00:33:10.165 "bdev_name": "crypto_ram3" 00:33:10.166 } 00:33:10.166 ]' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:10.166 /dev/nbd1' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c 
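The `waitfornbd nbd0` / `waitfornbd nbd1` calls traced above poll `/proc/partitions` until the nbd device appears, then prove it is readable by copying one 4 KiB block off it with `dd` and checking the copy is non-empty. A rough stand-alone sketch of that pattern (a temp file stands in for `/dev/nbd0` and the `/proc/partitions` grep becomes a plain existence test, so the sketch runs without a real nbd device):

```shell
#!/bin/bash
# Sketch of the waitfornbd pattern from common/autotest_common.sh:
# poll until the device shows up, then read one block back.

dev_file=$(mktemp)            # stand-in for /dev/nbd0
dd if=/dev/zero of="$dev_file" bs=4096 count=1 2>/dev/null

waitfor_device() {
    local dev=$1 tmp_read size i
    tmp_read=$(mktemp)
    # real script: grep -q -w nbd0 /proc/partitions inside the loop
    for ((i = 1; i <= 20; i++)); do
        [ -e "$dev" ] && break
        sleep 0.1
    done
    # read one 4 KiB block back; a non-empty copy means the device is usable
    dd if="$dev" of="$tmp_read" bs=4096 count=1 2>/dev/null
    size=$(stat -c %s "$tmp_read")
    rm -f "$tmp_read"
    [ "$size" -ne 0 ]
}

if waitfor_device "$dev_file"; then status=ready; else status=missing; fi
echo "status=$status"
```

The real helper also uses `iflag=direct` on the `dd` to bypass the page cache; that flag is omitted here because tmpfs-backed temp files may not support O_DIRECT.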
/dev/nbd 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:10.166 /dev/nbd1' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:10.166 256+0 records in 00:33:10.166 256+0 records out 00:33:10.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114082 s, 91.9 MB/s 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:10.166 256+0 records in 00:33:10.166 256+0 records out 00:33:10.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309695 s, 33.9 MB/s 00:33:10.166 19:17:24 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:10.166 256+0 records in 00:33:10.166 256+0 records out 00:33:10.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0377862 s, 27.8 MB/s 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- 
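The `nbd_dd_data_verify ... write` and `... verify` phases traced above fill a scratch file with 1 MiB of `/dev/urandom`, `dd` it onto every nbd device in the list, then `cmp` each device back against the source file. A minimal sketch of that write-then-verify round trip, with temp files standing in for `/dev/nbd0` and `/dev/nbd1` (the real script adds `oflag=direct` and spells the compare length as `cmp -n 1M`):

```shell
#!/bin/bash
# Sketch of nbd_dd_data_verify: write random data to every device in the
# list, then byte-compare each device against the source file.

rand_file=$(mktemp)
nbd_list=("$(mktemp)" "$(mktemp)")    # stand-ins for /dev/nbd0 /dev/nbd1

# write phase: 256 x 4 KiB of random data onto each "device"
dd if=/dev/urandom of="$rand_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$rand_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: compare the first 1 MiB of each "device" with the source
verify_ok=yes
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1048576 "$rand_file" "$dev" || verify_ok=no
done
echo "verify_ok=$verify_ok"
rm -f "$rand_file" "${nbd_list[@]}"
```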
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:10.166 19:17:24 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:10.424 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:10.681 19:17:25 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:10.681 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:10.938 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
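The `nbd_get_count` calls traced above pull the `nbd_device` fields out of the `nbd_get_disks` JSON with `jq -r` and count them with `grep -c`. A sketch with the JSON copied from the trace (`jq` is assumed to be installed); note that `grep -c` prints `0` but exits 1 when nothing matches, which is why the trace shows a bare `true` guarding the pipeline once the disks are stopped:

```shell
#!/bin/bash
# Sketch of nbd_get_count: jq extracts the device names, grep -c counts them.
disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "crypto_ram" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "crypto_ram3" }
]'

names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "count=$count"

# after nbd_stop_disk the list is '[]', the name list is empty, and
# grep -c exits 1 while still printing 0 -- hence the || true guard
empty_count=$(echo '' | grep -c /dev/nbd || true)
echo "empty_count=$empty_count"
```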
00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:10.939 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:11.196 malloc_lvol_verify 00:33:11.196 19:17:25 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:11.453 bd0b381b-a7bb-499a-a8d3-c6301ad6533f 00:33:11.453 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:11.710 44219ff5-c4fb-4ec5-940d-70ebb0604b36 00:33:11.710 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:11.967 /dev/nbd0 00:33:11.967 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:11.967 mke2fs 1.46.5 (30-Dec-2021) 00:33:11.967 Discarding device blocks: 0/4096 done 00:33:11.967 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:33:11.967 00:33:11.967 Allocating group tables: 0/1 done 00:33:11.967 Writing inode tables: 0/1 done 00:33:11.968 Creating journal (1024 blocks): done 00:33:11.968 Writing superblocks and filesystem accounting information: 0/1 done 00:33:11.968 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:11.968 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:12.226 19:17:26 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1848872 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 1848872 ']' 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 1848872 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1848872 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1848872' 00:33:12.226 killing process with pid 1848872 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # kill 1848872 00:33:12.226 19:17:26 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@973 -- # wait 1848872 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:12.486 00:33:12.486 real 0m5.644s 00:33:12.486 user 0m8.036s 00:33:12.486 sys 0m2.238s 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:12.486 ************************************ 00:33:12.486 END TEST bdev_nbd 00:33:12.486 ************************************ 00:33:12.486 19:17:27 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # 
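The `killprocess 1848872` sequence traced above requires a non-empty pid, confirms the process is still alive with `kill -0`, reports its command name via `ps --no-headers -o comm=` (here `reactor_0`), then terminates and reaps it. A sketch of that helper with a background `sleep` standing in for the spdk target:

```shell
#!/bin/bash
# Sketch of the killprocess helper: guard the pid, check liveness,
# report the command name, SIGTERM, and wait for the exit.
sleep 30 &
pid=$!

killprocess() {
    local p=$1 name
    [ -n "$p" ] || return 1            # the '[' -z "$p" ']' guard
    kill -0 "$p" 2>/dev/null || return 1
    name=$(ps --no-headers -o comm= "$p")
    echo "killing process with pid $p ($name)"
    kill "$p"
    wait "$p" 2>/dev/null
    return 0
}

killprocess "$pid"
alive=no
kill -0 "$pid" 2>/dev/null && alive=yes
echo "alive=$alive"
```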
[[ y == y ]] 00:33:12.486 19:17:27 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:33:12.486 19:17:27 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:33:12.486 19:17:27 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:33:12.486 19:17:27 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:12.486 19:17:27 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:12.486 19:17:27 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:12.486 ************************************ 00:33:12.486 START TEST bdev_fio 00:33:12.486 ************************************ 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:12.486 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1280 -- # local workload=verify 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=crypto_ram 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:12.486 19:17:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:12.745 ************************************ 00:33:12.745 START TEST bdev_fio_rw_verify 00:33:12.745 ************************************ 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:12.745 19:17:27 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 
00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.745 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:12.746 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:12.746 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:12.746 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:12.746 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:12.746 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:12.746 19:17:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:13.003 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:13.003 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:13.003 fio-3.35 00:33:13.003 Starting 2 threads 00:33:13.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.261 EAL: Requested device 
0000:b6:01.0 cannot be used 00:33:13.261 [the same qat_pci_device_allocate()/EAL "Requested device ... cannot be used" pair repeats for each remaining QAT function, 0000:b6:01.1 through 0000:b8:02.4] 00:33:13.261
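The empty `asan_lib=` and `LD_PRELOAD=' .../build/fio/spdk_bdev'` lines earlier in the trace come from fio's sanitizer probe: the helper runs `ldd` on the spdk_bdev fio plugin, greps for each ASAN runtime (`libasan`, `libclang_rt.asan`), and keeps field 3 of any `lib => /path (addr)` hit for LD_PRELOAD. A sketch using a canned `ldd`-style string so it is self-contained; the build in this log found no sanitizer, so its `asan_lib` stayed empty, whereas the canned input below does contain one:

```shell
#!/bin/bash
# Sketch of the ASAN-runtime probe: scan ldd output and extract the
# library path (field 3 of "lib => /path (addr)") for LD_PRELOAD.
ldd_output='linux-vdso.so.1 (0x00007ffc1a3f0000)
libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f2a40000000)
libc.so.6 => /lib64/libc.so.6 (0x00007f2a3fc00000)'

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    hit=$(echo "$ldd_output" | grep "$sanitizer" | awk '{print $3}')
    if [ -n "$hit" ]; then
        asan_lib=$hit
        break
    fi
done
echo "asan_lib=$asan_lib"
# fio is then launched roughly as:
#   LD_PRELOAD="$asan_lib .../build/fio/spdk_bdev" /usr/src/fio/fio ...
```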
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.261 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:13.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.261 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:13.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.261 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:25.455 00:33:25.455 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1850214: Mon Jun 10 19:17:38 2024 00:33:25.455 read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(930MiB/10001msec) 00:33:25.455 slat (usec): min=13, max=158, avg=18.36, stdev= 3.31 00:33:25.455 clat (usec): min=6, max=736, avg=133.81, stdev=53.25 00:33:25.455 lat (usec): min=24, max=858, avg=152.17, stdev=54.56 00:33:25.455 clat percentiles (usec): 00:33:25.455 | 50.000th=[ 131], 99.000th=[ 255], 99.900th=[ 273], 99.990th=[ 343], 00:33:25.455 | 99.999th=[ 635] 00:33:25.455 write: IOPS=28.6k, BW=112MiB/s (117MB/s)(1059MiB/9473msec); 0 zone resets 00:33:25.455 slat (usec): min=13, max=434, avg=30.92, stdev= 3.95 00:33:25.455 clat (usec): min=23, max=1837, avg=179.24, stdev=82.05 00:33:25.455 lat (usec): min=47, max=1871, avg=210.16, stdev=83.50 00:33:25.455 clat percentiles (usec): 00:33:25.455 | 50.000th=[ 174], 99.000th=[ 355], 99.900th=[ 375], 99.990th=[ 586], 00:33:25.455 | 99.999th=[ 1778] 00:33:25.455 bw ( KiB/s): min=101504, max=115096, per=94.75%, avg=108493.89, stdev=2045.58, samples=38 00:33:25.455 iops : min=25376, max=28774, avg=27123.47, stdev=511.39, samples=38 00:33:25.455 lat (usec) : 10=0.01%, 20=0.01%, 50=5.43%, 100=18.65%, 250=64.02% 00:33:25.455 lat (usec) : 500=11.88%, 750=0.01%, 1000=0.01% 00:33:25.455 lat (msec) : 2=0.01% 00:33:25.455 cpu : usr=99.64%, sys=0.00%, ctx=32, majf=0, minf=472 00:33:25.455 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.455 
complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.455 issued rwts: total=237991,271187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.455 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:25.455 00:33:25.455 Run status group 0 (all jobs): 00:33:25.455 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=930MiB (975MB), run=10001-10001msec 00:33:25.455 WRITE: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=1059MiB (1111MB), run=9473-9473msec 00:33:25.455 00:33:25.455 real 0m11.152s 00:33:25.455 user 0m31.630s 00:33:25.455 sys 0m0.379s 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:33:25.455 ************************************ 00:33:25.455 END TEST bdev_fio_rw_verify 00:33:25.455 ************************************ 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local 
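The fio READ summary above can be cross-checked from the raw counters it also reports: `issued rwts: total=237991,...` at 4096 B per I/O over the `run=10001-10001msec` window should reproduce the stated 93.0 MiB/s (97.5 MB/s). A quick arithmetic check (all numbers taken from the log):

```shell
#!/bin/bash
# Cross-check of the fio READ bandwidth: 237991 reads x 4096 B / 10.001 s.
bytes=$((237991 * 4096))
mb_s=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / 10.001 / 1000000 }')
mib_s=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / 10.001 / 1048576 }')
echo "read bw: ${mb_s} MB/s = ${mib_s} MiB/s"
```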
fio_dir=/usr/src/fio 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "80cf73fd-cc6c-5a46-b8d9-6c2de846c168"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "80cf73fd-cc6c-5a46-b8d9-6c2de846c168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "8f34daef-85fc-5d36-a101-0abec1441cd6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "8f34daef-85fc-5d36-a101-0abec1441cd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:33:25.455 crypto_ram3 ]] 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "80cf73fd-cc6c-5a46-b8d9-6c2de846c168"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "80cf73fd-cc6c-5a46-b8d9-6c2de846c168",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "8f34daef-85fc-5d36-a101-0abec1441cd6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "8f34daef-85fc-5d36-a101-0abec1441cd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:33:25.455 19:17:38 
blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:25.455 19:17:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:25.455 ************************************ 00:33:25.455 START TEST bdev_fio_trim 00:33:25.455 ************************************ 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # shift 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libasan 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:25.456 19:17:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:25.456 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:25.456 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:25.456 fio-3.35 00:33:25.456 Starting 2 threads 00:33:25.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.456 EAL: 
Requested device 0000:b6:01.0 cannot be
used 00:33:25.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.456 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:25.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.456 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:25.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:25.456 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:35.426 00:33:35.426 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1852222: Mon Jun 10 19:17:49 2024 00:33:35.426 write: IOPS=42.7k, BW=167MiB/s (175MB/s)(1669MiB/10001msec); 0 zone resets 00:33:35.426 slat (usec): min=12, max=1540, avg=20.40, stdev= 4.54 00:33:35.426 clat (usec): min=13, max=1701, avg=154.29, stdev=85.33 00:33:35.426 lat (usec): min=30, max=1720, avg=174.69, stdev=88.32 00:33:35.426 clat percentiles (usec): 00:33:35.426 | 50.000th=[ 123], 99.000th=[ 318], 99.900th=[ 338], 99.990th=[ 486], 00:33:35.426 | 99.999th=[ 742] 00:33:35.426 bw ( KiB/s): min=168496, max=172032, per=100.00%, avg=171037.47, stdev=420.84, samples=38 00:33:35.426 iops : min=42124, max=43008, avg=42759.37, stdev=105.21, samples=38 00:33:35.426 trim: IOPS=42.7k, BW=167MiB/s (175MB/s)(1669MiB/10001msec); 0 zone resets 00:33:35.426 slat (nsec): min=5265, max=63338, avg=9235.31, stdev=2154.07 00:33:35.426 clat (usec): min=30, max=1720, avg=103.07, stdev=30.80 00:33:35.426 lat (usec): min=38, max=1728, avg=112.31, stdev=30.99 00:33:35.426 clat percentiles (usec): 00:33:35.426 | 50.000th=[ 104], 99.000th=[ 167], 99.900th=[ 178], 99.990th=[ 258], 00:33:35.426 | 99.999th=[ 502] 00:33:35.426 bw ( KiB/s): min=168520, max=172032, per=100.00%, avg=171038.74, stdev=419.21, samples=38 00:33:35.426 iops : min=42130, max=43008, avg=42759.68, stdev=104.80, samples=38 00:33:35.426 lat (usec) : 20=0.01%, 50=4.44%, 100=37.26%, 250=48.58%, 500=9.72% 00:33:35.426 lat (usec) : 750=0.01%, 1000=0.01% 00:33:35.426 lat (msec) : 2=0.01% 00:33:35.426 cpu : usr=99.64%, sys=0.00%, 
ctx=26, majf=0, minf=338 00:33:35.426 IO depths : 1=7.5%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:35.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.426 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.426 issued rwts: total=0,427343,427344,0 short=0,0,0,0 dropped=0,0,0,0 00:33:35.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:35.426 00:33:35.426 Run status group 0 (all jobs): 00:33:35.426 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1669MiB (1750MB), run=10001-10001msec 00:33:35.426 TRIM: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1669MiB (1750MB), run=10001-10001msec 00:33:35.426 00:33:35.426 real 0m11.144s 00:33:35.426 user 0m30.843s 00:33:35.426 sys 0m0.408s 00:33:35.426 19:17:49 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:35.426 19:17:49 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:33:35.426 ************************************ 00:33:35.426 END TEST bdev_fio_trim 00:33:35.426 ************************************ 00:33:35.426 19:17:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:33:35.426 19:17:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:35.426 19:17:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:33:35.426 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:35.426 19:17:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:33:35.426 00:33:35.426 real 0m22.654s 00:33:35.426 user 1m2.653s 00:33:35.427 sys 0m0.986s 00:33:35.427 19:17:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:35.427 19:17:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:35.427 
************************************ 00:33:35.427 END TEST bdev_fio 00:33:35.427 ************************************ 00:33:35.427 19:17:49 blockdev_crypto_sw -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:35.427 19:17:49 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:35.427 19:17:49 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:33:35.427 19:17:49 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:35.427 19:17:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:35.427 ************************************ 00:33:35.427 START TEST bdev_verify 00:33:35.427 ************************************ 00:33:35.427 19:17:49 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:35.427 [2024-06-10 19:17:49.935108] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:35.427 [2024-06-10 19:17:49.935162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1853855 ] 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:35.427 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:35.427 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:35.427 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:35.427 [2024-06-10 19:17:50.071270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:35.427 [2024-06-10 19:17:50.157348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.427 [2024-06-10 19:17:50.157353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.685 [2024-06-10 19:17:50.317686] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:35.685 [2024-06-10 19:17:50.317759] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:35.685 [2024-06-10 19:17:50.317772] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:35.685 [2024-06-10 19:17:50.325705] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:35.685 [2024-06-10 19:17:50.325722] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:35.685 [2024-06-10 19:17:50.325733] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:35.685 [2024-06-10 19:17:50.333729] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:35.685 [2024-06-10 19:17:50.333746] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:35.685 [2024-06-10 19:17:50.333756] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:35.685 Running I/O for 5 seconds... 00:33:40.949 00:33:40.949 Latency(us) 00:33:40.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.949 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:40.949 Verification LBA range: start 0x0 length 0x800 00:33:40.949 crypto_ram : 5.03 6749.27 26.36 0.00 0.00 18894.15 1454.90 21076.38 00:33:40.949 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:40.949 Verification LBA range: start 0x800 length 0x800 00:33:40.949 crypto_ram : 5.02 6751.10 26.37 0.00 0.00 18891.84 1717.04 20866.66 00:33:40.949 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:40.949 Verification LBA range: start 0x0 length 0x800 00:33:40.949 crypto_ram3 : 5.03 3382.78 13.21 0.00 0.00 37653.26 1579.42 25060.97 00:33:40.949 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:40.949 Verification LBA range: start 0x800 length 0x800 00:33:40.949 crypto_ram3 : 5.03 3383.66 13.22 0.00 0.00 37640.07 1926.76 25165.82 00:33:40.949 =================================================================================================================== 00:33:40.949 Total : 20266.80 79.17 0.00 0.00 25159.92 1454.90 25165.82 00:33:40.949 00:33:40.949 real 0m5.756s 00:33:40.949 user 0m10.848s 00:33:40.949 sys 0m0.233s 00:33:40.949 19:17:55 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:40.949 19:17:55 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:33:40.949 ************************************ 00:33:40.949 END 
TEST bdev_verify 00:33:40.949 ************************************ 00:33:40.949 19:17:55 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:40.949 19:17:55 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:33:40.949 19:17:55 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:40.949 19:17:55 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:41.207 ************************************ 00:33:41.207 START TEST bdev_verify_big_io 00:33:41.207 ************************************ 00:33:41.207 19:17:55 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:41.207 [2024-06-10 19:17:55.764792] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:41.207 [2024-06-10 19:17:55.764844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854930 ] 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:41.207 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:41.207 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:41.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.207 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:41.207 [2024-06-10 19:17:55.896461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:41.465 [2024-06-10 19:17:55.980999] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.465 [2024-06-10 19:17:55.981005] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.465 [2024-06-10 19:17:56.141002] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:41.465 [2024-06-10 19:17:56.141062] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:41.465 [2024-06-10 19:17:56.141083] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:41.465 [2024-06-10 19:17:56.149023] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:41.465 [2024-06-10 19:17:56.149040] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:41.465 [2024-06-10 19:17:56.149050] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:41.465 [2024-06-10 19:17:56.157043] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:41.465 [2024-06-10 19:17:56.157059] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:41.465 [2024-06-10 19:17:56.157069] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:41.465 Running I/O for 5 seconds... 00:33:48.019 00:33:48.019 Latency(us) 00:33:48.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.019 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:48.019 Verification LBA range: start 0x0 length 0x80 00:33:48.019 crypto_ram : 5.28 460.84 28.80 0.00 0.00 271557.70 5845.81 360710.14 00:33:48.019 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:48.019 Verification LBA range: start 0x80 length 0x80 00:33:48.019 crypto_ram : 5.29 459.65 28.73 0.00 0.00 272143.69 6553.60 362387.87 00:33:48.019 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:48.019 Verification LBA range: start 0x0 length 0x80 00:33:48.019 crypto_ram3 : 5.29 241.89 15.12 0.00 0.00 498460.81 5950.67 372454.20 00:33:48.019 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:48.019 Verification LBA range: start 0x80 length 0x80 00:33:48.019 crypto_ram3 : 5.30 241.35 15.08 0.00 0.00 499461.55 5059.38 372454.20 00:33:48.019 =================================================================================================================== 00:33:48.019 Total : 1403.73 87.73 0.00 0.00 350164.65 5059.38 372454.20 00:33:48.019 00:33:48.019 real 0m6.024s 00:33:48.019 user 0m11.396s 00:33:48.019 sys 0m0.231s 00:33:48.019 19:18:01 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:48.019 19:18:01 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:33:48.019 ************************************ 
00:33:48.019 END TEST bdev_verify_big_io 00:33:48.019 ************************************ 00:33:48.019 19:18:01 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.019 19:18:01 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:33:48.019 19:18:01 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:48.019 19:18:01 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:48.019 ************************************ 00:33:48.019 START TEST bdev_write_zeroes 00:33:48.019 ************************************ 00:33:48.019 19:18:01 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.019 [2024-06-10 19:18:01.880748] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:48.019 [2024-06-10 19:18:01.880804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855983 ] 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:48.019 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:48.019 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:48.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:48.019 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:48.019 [2024-06-10 19:18:02.015149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.019 [2024-06-10 19:18:02.098851] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.019 [2024-06-10 19:18:02.271290] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:48.019 [2024-06-10 19:18:02.271352] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:48.019 [2024-06-10 19:18:02.271366] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:48.019 [2024-06-10 19:18:02.279309] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:33:48.019 [2024-06-10 19:18:02.279331] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:48.020 [2024-06-10 19:18:02.279341] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:48.020 [2024-06-10 19:18:02.287330] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:33:48.020 [2024-06-10 19:18:02.287346] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:33:48.020 [2024-06-10 19:18:02.287356] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:48.020 Running I/O for 1 seconds... 00:33:48.953 00:33:48.953 Latency(us) 00:33:48.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.953 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:48.953 crypto_ram : 1.01 29026.96 113.39 0.00 0.00 4400.56 1179.65 6160.38 00:33:48.953 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:48.953 crypto_ram3 : 1.01 14558.27 56.87 0.00 0.00 8741.98 1507.33 9175.04 00:33:48.953 =================================================================================================================== 00:33:48.953 Total : 43585.24 170.25 0.00 0.00 5856.14 1179.65 9175.04 00:33:48.953 00:33:48.953 real 0m1.722s 00:33:48.953 user 0m1.466s 00:33:48.953 sys 0m0.232s 00:33:48.953 19:18:03 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:48.953 19:18:03 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:33:48.953 ************************************ 00:33:48.953 END TEST bdev_write_zeroes 00:33:48.953 ************************************ 00:33:48.953 19:18:03 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.953 19:18:03 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:33:48.953 19:18:03 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:48.953 19:18:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:48.953 
************************************ 00:33:48.953 START TEST bdev_json_nonenclosed 00:33:48.953 ************************************ 00:33:48.953 19:18:03 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.953 [2024-06-10 19:18:03.692264] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:33:48.953 [2024-06-10 19:18:03.692324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856341 ] 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.209 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:49.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:33:49.210 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:49.210 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.210 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:49.210 [2024-06-10 19:18:03.825838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.210 [2024-06-10 19:18:03.908873] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.210 [2024-06-10 19:18:03.908939] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:33:49.210 [2024-06-10 19:18:03.908958] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:49.210 [2024-06-10 19:18:03.908970] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:49.467 00:33:49.467 real 0m0.362s 00:33:49.467 user 0m0.208s 00:33:49.467 sys 0m0.152s 00:33:49.467 19:18:03 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:49.467 19:18:03 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:33:49.467 ************************************ 00:33:49.467 END TEST bdev_json_nonenclosed 00:33:49.467 ************************************ 00:33:49.467 19:18:04 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:49.467 19:18:04 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:33:49.467 19:18:04 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:49.468 19:18:04 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:49.468 ************************************ 00:33:49.468 START TEST bdev_json_nonarray 00:33:49.468 ************************************ 00:33:49.468 19:18:04 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:49.468 [2024-06-10 19:18:04.136111] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:49.468 [2024-06-10 19:18:04.136167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856438 ] 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.1 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.2 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.3 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.4 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.5 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.6 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:01.7 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.0 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.1 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.2 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.3 cannot be used 00:33:49.468 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.4 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.5 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.6 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b6:02.7 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.0 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.1 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.2 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.3 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.4 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.5 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.6 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:01.7 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.0 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.1 cannot be used 00:33:49.468 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.2 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.3 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.4 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.5 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.6 cannot be used 00:33:49.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.468 EAL: Requested device 0000:b8:02.7 cannot be used 00:33:49.728 [2024-06-10 19:18:04.269130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.728 [2024-06-10 19:18:04.353229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.728 [2024-06-10 19:18:04.353299] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:33:49.728 [2024-06-10 19:18:04.353319] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:49.728 [2024-06-10 19:18:04.353331] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:49.728 00:33:49.728 real 0m0.361s 00:33:49.728 user 0m0.216s 00:33:49.728 sys 0m0.143s 00:33:49.728 19:18:04 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:49.728 19:18:04 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:33:49.728 ************************************ 00:33:49.728 END TEST bdev_json_nonarray 00:33:49.728 ************************************ 00:33:49.987 19:18:04 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:33:49.987 19:18:04 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:33:49.987 19:18:04 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:33:49.987 19:18:04 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:33:49.987 19:18:04 blockdev_crypto_sw -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:49.987 19:18:04 blockdev_crypto_sw -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:49.987 19:18:04 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:49.987 ************************************ 00:33:49.987 START TEST bdev_crypto_enomem 00:33:49.987 ************************************ 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1124 -- # bdev_crypto_enomem 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local base_dev=base0 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:33:49.987 19:18:04 
blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=1856554 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 1856554 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@830 -- # '[' -z 1856554 ']' 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:49.987 19:18:04 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:49.987 [2024-06-10 19:18:04.579250] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:49.987 [2024-06-10 19:18:04.579309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856554 ] 00:33:49.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:49.987 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:49.987 [... identical qat_pci_device_allocate()/"EAL: Requested device ... cannot be used" message pairs repeated for devices 0000:b6:01.1 through 0000:b8:02.7, omitted ...] 00:33:49.987 [2024-06-10 19:18:04.704287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.246 [2024-06-10 19:18:04.794813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@863 -- # return 0 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:50.812 true 00:33:50.812 base0 00:33:50.812 true 00:33:50.812 [2024-06-10 19:18:05.510970] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:33:50.812 crypt0 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0 
00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_name=crypt0 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local i 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:50.812 [ 00:33:50.812 { 00:33:50.812 "name": "crypt0", 00:33:50.812 "aliases": [ 00:33:50.812 "9148116a-c63c-5076-980e-edd8e807256e" 00:33:50.812 ], 00:33:50.812 "product_name": "crypto", 00:33:50.812 "block_size": 512, 00:33:50.812 "num_blocks": 2097152, 00:33:50.812 "uuid": "9148116a-c63c-5076-980e-edd8e807256e", 00:33:50.812 "assigned_rate_limits": { 00:33:50.812 "rw_ios_per_sec": 0, 00:33:50.812 "rw_mbytes_per_sec": 0, 00:33:50.812 "r_mbytes_per_sec": 0, 00:33:50.812 "w_mbytes_per_sec": 0 00:33:50.812 }, 00:33:50.812 "claimed": false, 00:33:50.812 "zoned": false, 00:33:50.812 "supported_io_types": { 
00:33:50.812 "read": true, 00:33:50.812 "write": true, 00:33:50.812 "unmap": false, 00:33:50.812 "write_zeroes": true, 00:33:50.812 "flush": false, 00:33:50.812 "reset": true, 00:33:50.812 "compare": false, 00:33:50.812 "compare_and_write": false, 00:33:50.812 "abort": false, 00:33:50.812 "nvme_admin": false, 00:33:50.812 "nvme_io": false 00:33:50.812 }, 00:33:50.812 "memory_domains": [ 00:33:50.812 { 00:33:50.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.812 "dma_device_type": 2 00:33:50.812 } 00:33:50.812 ], 00:33:50.812 "driver_specific": { 00:33:50.812 "crypto": { 00:33:50.812 "base_bdev_name": "EE_base0", 00:33:50.812 "name": "crypt0", 00:33:50.812 "key_name": "test_dek_sw" 00:33:50.812 } 00:33:50.812 } 00:33:50.812 } 00:33:50.812 ] 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # return 0 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=1857034 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1 00:33:50.812 19:18:05 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:51.068 Running I/O for 5 seconds... 
00:33:52.003 19:18:06 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:33:52.003 19:18:06 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.003 19:18:06 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:52.003 19:18:06 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.003 19:18:06 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 1857034 00:33:56.186 00:33:56.186 Latency(us) 00:33:56.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.186 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:33:56.186 crypt0 : 5.00 39517.97 154.37 0.00 0.00 806.35 380.11 1507.33 00:33:56.186 =================================================================================================================== 00:33:56.186 Total : 39517.97 154.37 0.00 0.00 806.35 380.11 1507.33 00:33:56.186 0 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 1856554 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@949 -- # '[' -z 1856554 ']' 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # kill -0 1856554 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # uname 00:33:56.186 19:18:10 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1856554 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1856554' 00:33:56.186 killing process with pid 1856554 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # kill 1856554 00:33:56.186 Received shutdown signal, test time was about 5.000000 seconds 00:33:56.186 00:33:56.186 Latency(us) 00:33:56.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.186 =================================================================================================================== 00:33:56.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@973 -- # wait 1856554 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT 00:33:56.186 00:33:56.186 real 0m6.405s 00:33:56.186 user 0m6.646s 00:33:56.186 sys 0m0.368s 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:56.186 19:18:10 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:56.186 ************************************ 00:33:56.186 END TEST bdev_crypto_enomem 00:33:56.186 ************************************ 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # 
cleanup 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:33:56.444 19:18:10 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:33:56.444 00:33:56.444 real 0m53.762s 00:33:56.444 user 1m47.711s 00:33:56.444 sys 0m6.416s 00:33:56.444 19:18:10 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:56.444 19:18:10 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:56.444 ************************************ 00:33:56.444 END TEST blockdev_crypto_sw 00:33:56.444 ************************************ 00:33:56.444 19:18:11 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:33:56.444 19:18:11 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:56.444 19:18:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:56.444 19:18:11 -- common/autotest_common.sh@10 -- # set +x 00:33:56.444 ************************************ 00:33:56.444 START TEST blockdev_crypto_qat 00:33:56.444 ************************************ 00:33:56.444 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:33:56.444 * Looking for test storage... 
00:33:56.444 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # 
env_ctx= 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1858026 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:33:56.445 19:18:11 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 1858026 00:33:56.445 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@830 -- # '[' -z 1858026 ']' 00:33:56.743 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.743 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:56.743 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.743 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:56.743 19:18:11 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:56.743 [2024-06-10 19:18:11.266487] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:33:56.743 [2024-06-10 19:18:11.266553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858026 ] 00:33:56.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.743 EAL: Requested device 0000:b6:01.0 cannot be used 00:33:56.743 [... identical qat_pci_device_allocate()/"EAL: Requested device ... cannot be used" message pairs repeated for devices 0000:b6:01.1 through 0000:b8:02.7, omitted ...] 00:33:56.743 [2024-06-10 19:18:11.391906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.743 [2024-06-10 19:18:11.486651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.713 19:18:12 blockdev_crypto_qat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:57.713 19:18:12 blockdev_crypto_qat -- common/autotest_common.sh@863 -- # return 0 00:33:57.713 19:18:12 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:57.713 19:18:12 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:33:57.713 19:18:12 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:33:57.713 19:18:12 blockdev_crypto_qat -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.713 19:18:12 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:57.713 [2024-06-10 19:18:12.160760] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:57.713 [2024-06-10 19:18:12.168798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:57.713 [2024-06-10 19:18:12.176811] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:57.713 [2024-06-10 19:18:12.244325] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:00.253 true 00:34:00.253 true 00:34:00.253 true 00:34:00.253 true 00:34:00.253 Malloc0 00:34:00.253 Malloc1 00:34:00.253 Malloc2 00:34:00.253 Malloc3 00:34:00.253 [2024-06-10 19:18:14.580539] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:00.253 crypto_ram 00:34:00.253 [2024-06-10 19:18:14.588560] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:00.253 crypto_ram1 00:34:00.253 [2024-06-10 19:18:14.596600] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:00.253 crypto_ram2 00:34:00.253 [2024-06-10 19:18:14.604610] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:00.253 crypto_ram3 00:34:00.253 [ 00:34:00.253 { 00:34:00.253 "name": "Malloc1", 00:34:00.253 "aliases": [ 00:34:00.253 "b7b3de72-e897-4721-9530-2565c1f3712d" 00:34:00.253 ], 00:34:00.253 "product_name": "Malloc disk", 00:34:00.253 "block_size": 512, 00:34:00.253 "num_blocks": 65536, 00:34:00.253 "uuid": "b7b3de72-e897-4721-9530-2565c1f3712d", 00:34:00.253 "assigned_rate_limits": { 00:34:00.253 "rw_ios_per_sec": 0, 00:34:00.253 "rw_mbytes_per_sec": 0, 00:34:00.253 "r_mbytes_per_sec": 0, 00:34:00.253 "w_mbytes_per_sec": 0 00:34:00.253 }, 00:34:00.253 "claimed": true, 00:34:00.253 "claim_type": "exclusive_write", 00:34:00.253 "zoned": false, 00:34:00.253 "supported_io_types": { 00:34:00.253 "read": true, 00:34:00.253 "write": true, 00:34:00.253 "unmap": true, 00:34:00.253 "write_zeroes": true, 00:34:00.253 "flush": true, 00:34:00.253 "reset": true, 00:34:00.253 "compare": false, 00:34:00.253 "compare_and_write": false, 00:34:00.253 "abort": true, 00:34:00.253 "nvme_admin": false, 
00:34:00.253 "nvme_io": false 00:34:00.253 }, 00:34:00.253 "memory_domains": [ 00:34:00.253 { 00:34:00.253 "dma_device_id": "system", 00:34:00.253 "dma_device_type": 1 00:34:00.253 }, 00:34:00.253 { 00:34:00.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.253 "dma_device_type": 2 00:34:00.253 } 00:34:00.253 ], 00:34:00.253 "driver_specific": {} 00:34:00.253 } 00:34:00.253 ] 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:00.253 
19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:00.253 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.253 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:34:00.254 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "c68d6942-de53-5799-929d-94b7aad490ae"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c68d6942-de53-5799-929d-94b7aad490ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "9340dad9-ba67-5cde-be96-c8f90bf59fc6"' ' ],' ' "product_name": 
"crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9340dad9-ba67-5cde-be96-c8f90bf59fc6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "fb8b9c32-5e62-5d9f-939e-8233a842c225"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "fb8b9c32-5e62-5d9f-939e-8233a842c225",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' 
'}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "0e823780-b823-5e46-930c-6ffdaeb14ec6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "0e823780-b823-5e46-930c-6ffdaeb14ec6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:34:00.254 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:34:00.254 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:34:00.254 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:34:00.254 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:34:00.254 19:18:14 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 1858026 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@949 -- # '[' -z 1858026 ']' 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # kill -0 1858026 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # uname 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # 
ps --no-headers -o comm= 1858026 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1858026' 00:34:00.254 killing process with pid 1858026 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # kill 1858026 00:34:00.254 19:18:14 blockdev_crypto_qat -- common/autotest_common.sh@973 -- # wait 1858026 00:34:00.820 19:18:15 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:00.820 19:18:15 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:34:00.820 19:18:15 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:34:00.820 19:18:15 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:00.820 19:18:15 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:00.820 ************************************ 00:34:00.820 START TEST bdev_hello_world 00:34:00.820 ************************************ 00:34:00.820 19:18:15 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:34:00.820 [2024-06-10 19:18:15.461306] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:34:00.820 [2024-06-10 19:18:15.461361] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858833 ] 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.0 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.1 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.2 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.3 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.4 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.5 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.6 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:01.7 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.0 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.1 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.2 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.3 cannot be used 00:34:00.820 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.4 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.5 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.6 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b6:02.7 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b8:01.0 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.820 EAL: Requested device 0000:b8:01.1 cannot be used 00:34:00.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:01.2 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:01.3 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:01.4 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:01.5 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:01.6 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:01.7 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.0 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.1 cannot be used 00:34:00.821 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.2 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.3 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.4 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.5 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.6 cannot be used 00:34:00.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:00.821 EAL: Requested device 0000:b8:02.7 cannot be used 00:34:01.078 [2024-06-10 19:18:15.592298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.078 [2024-06-10 19:18:15.675018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.078 [2024-06-10 19:18:15.696215] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:01.078 [2024-06-10 19:18:15.704241] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:01.078 [2024-06-10 19:18:15.712256] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:01.078 [2024-06-10 19:18:15.817197] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:03.606 [2024-06-10 19:18:18.005519] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:03.606 [2024-06-10 19:18:18.005593] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:03.606 [2024-06-10 19:18:18.005612] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base 
bdev arrival 00:34:03.606 [2024-06-10 19:18:18.013538] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:03.606 [2024-06-10 19:18:18.013557] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:03.606 [2024-06-10 19:18:18.013568] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:03.606 [2024-06-10 19:18:18.021558] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:03.606 [2024-06-10 19:18:18.021581] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:03.606 [2024-06-10 19:18:18.021592] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:03.606 [2024-06-10 19:18:18.029586] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:03.606 [2024-06-10 19:18:18.029604] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:03.606 [2024-06-10 19:18:18.029615] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:03.606 [2024-06-10 19:18:18.101013] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:03.606 [2024-06-10 19:18:18.101053] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:34:03.606 [2024-06-10 19:18:18.101070] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:03.606 [2024-06-10 19:18:18.102273] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:03.606 [2024-06-10 19:18:18.102348] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:03.606 [2024-06-10 19:18:18.102364] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:03.606 [2024-06-10 19:18:18.102405] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:34:03.606 00:34:03.606 [2024-06-10 19:18:18.102423] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:03.865 00:34:03.865 real 0m3.023s 00:34:03.865 user 0m2.644s 00:34:03.865 sys 0m0.339s 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:03.865 ************************************ 00:34:03.865 END TEST bdev_hello_world 00:34:03.865 ************************************ 00:34:03.865 19:18:18 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:34:03.865 19:18:18 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:03.865 19:18:18 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:03.865 19:18:18 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:03.865 ************************************ 00:34:03.865 START TEST bdev_bounds 00:34:03.865 ************************************ 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1859378 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1859378' 00:34:03.865 Process bdevio pid: 1859378 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1859378 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds 
-- common/autotest_common.sh@830 -- # '[' -z 1859378 ']' 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:03.865 19:18:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:03.865 [2024-06-10 19:18:18.576806] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:34:03.865 [2024-06-10 19:18:18.576864] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859378 ] 00:34:04.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.123 EAL: Requested device 0000:b6:01.0 cannot be used 00:34:04.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:01.1 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:01.2 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:01.3 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:01.4 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested 
device 0000:b6:01.5 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:01.6 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:01.7 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.0 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.1 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.2 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.3 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.4 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.5 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.6 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b6:02.7 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.0 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.1 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.2 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.3 
cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.4 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.5 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.6 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:01.7 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.0 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.1 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.2 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.3 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.4 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.5 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.6 cannot be used 00:34:04.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.124 EAL: Requested device 0000:b8:02.7 cannot be used 00:34:04.124 [2024-06-10 19:18:18.711485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:04.124 [2024-06-10 19:18:18.800999] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.124 [2024-06-10 19:18:18.801096] reactor.c: 929:reactor_run: *NOTICE*: 
Reactor started on core 2 00:34:04.124 [2024-06-10 19:18:18.801100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.124 [2024-06-10 19:18:18.822340] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:04.124 [2024-06-10 19:18:18.830373] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:04.124 [2024-06-10 19:18:18.838387] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:04.381 [2024-06-10 19:18:18.935283] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:06.911 [2024-06-10 19:18:21.116966] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:06.911 [2024-06-10 19:18:21.117038] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:06.911 [2024-06-10 19:18:21.117054] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.911 [2024-06-10 19:18:21.124984] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:06.911 [2024-06-10 19:18:21.125003] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:06.911 [2024-06-10 19:18:21.125014] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.911 [2024-06-10 19:18:21.133008] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:06.911 [2024-06-10 19:18:21.133030] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:06.911 [2024-06-10 19:18:21.133043] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.911 [2024-06-10 19:18:21.141029] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_qat_xts2" 00:34:06.911 [2024-06-10 19:18:21.141047] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:06.911 [2024-06-10 19:18:21.141058] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.911 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:06.911 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:34:06.911 19:18:21 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:06.911 I/O targets: 00:34:06.911 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:34:06.911 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:34:06.911 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:34:06.911 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:34:06.911 00:34:06.911 00:34:06.911 CUnit - A unit testing framework for C - Version 2.1-3 00:34:06.911 http://cunit.sourceforge.net/ 00:34:06.911 00:34:06.911 00:34:06.911 Suite: bdevio tests on: crypto_ram3 00:34:06.911 Test: blockdev write read block ...passed 00:34:06.911 Test: blockdev write zeroes read block ...passed 00:34:06.911 Test: blockdev write zeroes read no split ...passed 00:34:06.911 Test: blockdev write zeroes read split ...passed 00:34:06.911 Test: blockdev write zeroes read split partial ...passed 00:34:06.911 Test: blockdev reset ...passed 00:34:06.911 Test: blockdev write read 8 blocks ...passed 00:34:06.911 Test: blockdev write read size > 128k ...passed 00:34:06.911 Test: blockdev write read invalid size ...passed 00:34:06.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:06.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:06.911 Test: blockdev write read max offset ...passed 00:34:06.911 Test: blockdev write read 2 blocks on overlapped 
address offset ...passed 00:34:06.911 Test: blockdev writev readv 8 blocks ...passed 00:34:06.911 Test: blockdev writev readv 30 x 1block ...passed 00:34:06.911 Test: blockdev writev readv block ...passed 00:34:06.911 Test: blockdev writev readv size > 128k ...passed 00:34:06.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:06.911 Test: blockdev comparev and writev ...passed 00:34:06.911 Test: blockdev nvme passthru rw ...passed 00:34:06.911 Test: blockdev nvme passthru vendor specific ...passed 00:34:06.911 Test: blockdev nvme admin passthru ...passed 00:34:06.911 Test: blockdev copy ...passed 00:34:06.911 Suite: bdevio tests on: crypto_ram2 00:34:06.911 Test: blockdev write read block ...passed 00:34:06.911 Test: blockdev write zeroes read block ...passed 00:34:06.911 Test: blockdev write zeroes read no split ...passed 00:34:06.911 Test: blockdev write zeroes read split ...passed 00:34:06.911 Test: blockdev write zeroes read split partial ...passed 00:34:06.911 Test: blockdev reset ...passed 00:34:06.911 Test: blockdev write read 8 blocks ...passed 00:34:06.911 Test: blockdev write read size > 128k ...passed 00:34:06.911 Test: blockdev write read invalid size ...passed 00:34:06.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:06.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:06.911 Test: blockdev write read max offset ...passed 00:34:06.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:06.911 Test: blockdev writev readv 8 blocks ...passed 00:34:06.911 Test: blockdev writev readv 30 x 1block ...passed 00:34:06.911 Test: blockdev writev readv block ...passed 00:34:06.911 Test: blockdev writev readv size > 128k ...passed 00:34:06.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:06.911 Test: blockdev comparev and writev ...passed 00:34:06.911 Test: blockdev nvme passthru rw ...passed 00:34:06.911 Test: blockdev nvme 
passthru vendor specific ...passed 00:34:06.911 Test: blockdev nvme admin passthru ...passed 00:34:06.911 Test: blockdev copy ...passed 00:34:06.911 Suite: bdevio tests on: crypto_ram1 00:34:06.911 Test: blockdev write read block ...passed 00:34:06.911 Test: blockdev write zeroes read block ...passed 00:34:06.912 Test: blockdev write zeroes read no split ...passed 00:34:06.912 Test: blockdev write zeroes read split ...passed 00:34:06.912 Test: blockdev write zeroes read split partial ...passed 00:34:06.912 Test: blockdev reset ...passed 00:34:06.912 Test: blockdev write read 8 blocks ...passed 00:34:06.912 Test: blockdev write read size > 128k ...passed 00:34:06.912 Test: blockdev write read invalid size ...passed 00:34:06.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:06.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:06.912 Test: blockdev write read max offset ...passed 00:34:06.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:06.912 Test: blockdev writev readv 8 blocks ...passed 00:34:06.912 Test: blockdev writev readv 30 x 1block ...passed 00:34:06.912 Test: blockdev writev readv block ...passed 00:34:06.912 Test: blockdev writev readv size > 128k ...passed 00:34:06.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:06.912 Test: blockdev comparev and writev ...passed 00:34:06.912 Test: blockdev nvme passthru rw ...passed 00:34:06.912 Test: blockdev nvme passthru vendor specific ...passed 00:34:06.912 Test: blockdev nvme admin passthru ...passed 00:34:06.912 Test: blockdev copy ...passed 00:34:06.912 Suite: bdevio tests on: crypto_ram 00:34:06.912 Test: blockdev write read block ...passed 00:34:06.912 Test: blockdev write zeroes read block ...passed 00:34:06.912 Test: blockdev write zeroes read no split ...passed 00:34:06.912 Test: blockdev write zeroes read split ...passed 00:34:06.912 Test: blockdev write zeroes read split partial 
...passed 00:34:06.912 Test: blockdev reset ...passed 00:34:06.912 Test: blockdev write read 8 blocks ...passed 00:34:06.912 Test: blockdev write read size > 128k ...passed 00:34:06.912 Test: blockdev write read invalid size ...passed 00:34:06.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:06.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:06.912 Test: blockdev write read max offset ...passed 00:34:06.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:06.912 Test: blockdev writev readv 8 blocks ...passed 00:34:06.912 Test: blockdev writev readv 30 x 1block ...passed 00:34:06.912 Test: blockdev writev readv block ...passed 00:34:06.912 Test: blockdev writev readv size > 128k ...passed 00:34:06.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:06.912 Test: blockdev comparev and writev ...passed 00:34:06.912 Test: blockdev nvme passthru rw ...passed 00:34:06.912 Test: blockdev nvme passthru vendor specific ...passed 00:34:06.912 Test: blockdev nvme admin passthru ...passed 00:34:06.912 Test: blockdev copy ...passed 00:34:06.912 00:34:06.912 Run Summary: Type Total Ran Passed Failed Inactive 00:34:06.912 suites 4 4 n/a 0 0 00:34:06.912 tests 92 92 92 0 0 00:34:06.912 asserts 520 520 520 0 n/a 00:34:06.912 00:34:06.912 Elapsed time = 0.501 seconds 00:34:06.912 0 00:34:06.912 19:18:21 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1859378 00:34:06.912 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 1859378 ']' 00:34:06.912 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 1859378 00:34:06.912 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:34:06.912 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:06.912 19:18:21 blockdev_crypto_qat.bdev_bounds -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1859378 00:34:07.170 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:07.170 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:07.170 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1859378' 00:34:07.170 killing process with pid 1859378 00:34:07.170 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # kill 1859378 00:34:07.170 19:18:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@973 -- # wait 1859378 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:34:07.429 00:34:07.429 real 0m3.493s 00:34:07.429 user 0m9.778s 00:34:07.429 sys 0m0.526s 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:07.429 ************************************ 00:34:07.429 END TEST bdev_bounds 00:34:07.429 ************************************ 00:34:07.429 19:18:22 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:34:07.429 19:18:22 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:34:07.429 19:18:22 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:07.429 19:18:22 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:07.429 ************************************ 00:34:07.429 START TEST bdev_nbd 00:34:07.429 ************************************ 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:34:07.429 
19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1859950 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1859950 /var/tmp/spdk-nbd.sock 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@830 -- # '[' -z 1859950 ']' 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:07.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:07.429 19:18:22 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:07.429 [2024-06-10 19:18:22.159706] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:34:07.429 [2024-06-10 19:18:22.159761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.0 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.1 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.2 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.3 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.4 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.5 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.6 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:01.7 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.687 EAL: Requested device 0000:b6:02.0 cannot be used 00:34:07.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.1 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.2 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.3 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.4 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.5 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.6 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b6:02.7 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.0 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.1 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.2 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.3 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.4 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.5 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.6 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:01.7 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.0 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.1 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:34:07.688 EAL: Requested device 0000:b8:02.2 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.3 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.4 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.5 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.6 cannot be used 00:34:07.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:07.688 EAL: Requested device 0000:b8:02.7 cannot be used 00:34:07.688 [2024-06-10 19:18:22.293155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.688 [2024-06-10 19:18:22.380039] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.688 [2024-06-10 19:18:22.401235] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:07.688 [2024-06-10 19:18:22.409261] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:07.688 [2024-06-10 19:18:22.417276] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:07.946 [2024-06-10 19:18:22.526226] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:10.473 [2024-06-10 19:18:24.708470] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:10.473 [2024-06-10 19:18:24.708532] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:10.473 [2024-06-10 19:18:24.708546] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:10.473 [2024-06-10 19:18:24.716492] 
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:10.473 [2024-06-10 19:18:24.716512] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:10.473 [2024-06-10 19:18:24.716523] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:10.473 [2024-06-10 19:18:24.724510] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:10.473 [2024-06-10 19:18:24.724527] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:10.473 [2024-06-10 19:18:24.724537] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:10.473 [2024-06-10 19:18:24.732530] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:10.473 [2024-06-10 19:18:24.732546] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:10.473 [2024-06-10 19:18:24.732557] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@863 -- # return 0 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:10.473 19:18:24 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:10.473 19:18:25 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:10.473 1+0 records in 00:34:10.473 1+0 records out 00:34:10.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029357 s, 14.0 MB/s 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:10.473 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:10.474 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@868 -- # local i 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:10.732 1+0 records in 00:34:10.732 1+0 records out 00:34:10.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258671 s, 15.8 MB/s 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:10.732 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk crypto_ram2 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd2 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd2 /proc/partitions 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:10.989 1+0 records in 00:34:10.989 1+0 records out 00:34:10.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308157 s, 13.3 MB/s 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:10.989 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd3 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd3 /proc/partitions 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:11.247 1+0 records in 00:34:11.247 1+0 records out 00:34:11.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352713 s, 11.6 MB/s 
00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:34:11.247 19:18:25 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:11.504 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd0", 00:34:11.504 "bdev_name": "crypto_ram" 00:34:11.504 }, 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd1", 00:34:11.504 "bdev_name": "crypto_ram1" 00:34:11.504 }, 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd2", 00:34:11.504 "bdev_name": "crypto_ram2" 00:34:11.504 }, 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd3", 00:34:11.504 "bdev_name": "crypto_ram3" 00:34:11.504 } 00:34:11.504 ]' 00:34:11.504 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:11.504 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd0", 00:34:11.504 "bdev_name": "crypto_ram" 00:34:11.504 }, 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd1", 00:34:11.504 "bdev_name": 
"crypto_ram1" 00:34:11.504 }, 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd2", 00:34:11.504 "bdev_name": "crypto_ram2" 00:34:11.504 }, 00:34:11.504 { 00:34:11.504 "nbd_device": "/dev/nbd3", 00:34:11.504 "bdev_name": "crypto_ram3" 00:34:11.504 } 00:34:11.504 ]' 00:34:11.504 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:11.504 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:34:11.505 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.505 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:34:11.505 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:11.505 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:11.505 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:11.505 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:11.762 19:18:26 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:11.762 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.020 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.278 19:18:26 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.536 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.816 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:12.817 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:34:13.074 /dev/nbd0 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.074 1+0 records in 00:34:13.074 1+0 records out 00:34:13.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311952 s, 13.1 MB/s 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:13.074 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:34:13.332 /dev/nbd1 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.332 1+0 records in 00:34:13.332 1+0 records out 00:34:13.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308285 s, 13.3 MB/s 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:13.332 19:18:27 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:34:13.590 /dev/nbd10 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:34:13.590 19:18:28 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd10 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd10 /proc/partitions 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:13.590 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.591 1+0 records in 00:34:13.591 1+0 records out 00:34:13.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301101 s, 13.6 MB/s 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:13.591 19:18:28 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:13.591 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:34:13.849 /dev/nbd11 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local nbd_name=nbd11 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local i 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # grep -q -w nbd11 /proc/partitions 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # break 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.849 1+0 records in 00:34:13.849 1+0 records out 00:34:13.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002994 s, 13.7 MB/s 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # size=4096 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # return 0 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:13.849 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd0", 00:34:14.107 "bdev_name": "crypto_ram" 00:34:14.107 }, 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd1", 00:34:14.107 "bdev_name": "crypto_ram1" 00:34:14.107 }, 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd10", 00:34:14.107 "bdev_name": "crypto_ram2" 00:34:14.107 }, 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd11", 00:34:14.107 "bdev_name": "crypto_ram3" 00:34:14.107 } 00:34:14.107 ]' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd0", 00:34:14.107 "bdev_name": "crypto_ram" 00:34:14.107 }, 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd1", 00:34:14.107 "bdev_name": "crypto_ram1" 00:34:14.107 }, 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd10", 00:34:14.107 "bdev_name": "crypto_ram2" 00:34:14.107 }, 00:34:14.107 { 00:34:14.107 "nbd_device": "/dev/nbd11", 00:34:14.107 "bdev_name": "crypto_ram3" 
00:34:14.107 } 00:34:14.107 ]' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:14.107 /dev/nbd1 00:34:14.107 /dev/nbd10 00:34:14.107 /dev/nbd11' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:14.107 /dev/nbd1 00:34:14.107 /dev/nbd10 00:34:14.107 /dev/nbd11' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:14.107 256+0 records in 00:34:14.107 256+0 records out 00:34:14.107 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0105709 s, 99.2 MB/s 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:14.107 256+0 records in 00:34:14.107 256+0 records out 00:34:14.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0764729 s, 13.7 MB/s 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:14.107 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:14.366 256+0 records in 00:34:14.366 256+0 records out 00:34:14.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0501381 s, 20.9 MB/s 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:34:14.366 256+0 records in 00:34:14.366 256+0 records out 00:34:14.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0527341 s, 19.9 MB/s 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:34:14.366 256+0 records in 00:34:14.366 256+0 records out 00:34:14.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307132 s, 34.1 MB/s 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:34:14.366 19:18:28 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:34:14.366 19:18:29 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.366 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.624 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:14.882 19:18:29 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.882 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:15.140 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:15.140 19:18:29 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:15.397 19:18:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:15.397 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:15.397 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:15.654 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:15.654 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:15.654 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:15.654 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:15.654 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:15.654 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:15.654 19:18:30 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:15.655 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:15.911 malloc_lvol_verify 00:34:15.911 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:16.169 545153c9-2400-4bfc-b336-25c846121575 00:34:16.169 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:16.426 87ce6b16-cf41-4eec-946c-fa239d3e20d0 00:34:16.426 19:18:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:16.684 /dev/nbd0 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:16.684 mke2fs 1.46.5 (30-Dec-2021) 00:34:16.684 Discarding device blocks: 0/4096 done 00:34:16.684 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:16.684 00:34:16.684 Allocating group tables: 0/1 done 00:34:16.684 Writing inode tables: 0/1 done 00:34:16.684 Creating journal (1024 blocks): done 00:34:16.684 Writing superblocks and filesystem accounting information: 0/1 done 00:34:16.684 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:16.684 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1859950 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@949 -- # '[' -z 1859950 ']' 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # kill -0 1859950 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # uname 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1859950 00:34:16.941 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:16.942 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:16.942 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1859950' 00:34:16.942 killing process with pid 1859950 00:34:16.942 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # kill 1859950 00:34:16.942 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@973 -- # wait 1859950 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:34:17.200 00:34:17.200 real 0m9.764s 00:34:17.200 user 0m12.739s 
00:34:17.200 sys 0m3.807s 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:17.200 ************************************ 00:34:17.200 END TEST bdev_nbd 00:34:17.200 ************************************ 00:34:17.200 19:18:31 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:34:17.200 19:18:31 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']' 00:34:17.200 19:18:31 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']' 00:34:17.200 19:18:31 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:34:17.200 19:18:31 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:17.200 19:18:31 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:17.200 19:18:31 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:17.200 ************************************ 00:34:17.200 START TEST bdev_fio 00:34:17.200 ************************************ 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:17.200 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:34:17.200 19:18:31 
blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1298 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:17.200 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:34:17.458 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:34:17.458 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:34:17.458 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:34:17.458 19:18:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]' 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:17.458 19:18:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:17.458 ************************************ 00:34:17.458 START TEST bdev_fio_rw_verify 00:34:17.458 ************************************ 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1338 -- # local sanitizers 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 
00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:17.459 19:18:32 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:18.023 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:18.023 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:18.023 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:18.024 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:18.024 fio-3.35 00:34:18.024 Starting 4 threads 00:34:18.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:18.024 EAL: Requested device 0000:b6:01.0 cannot be used 00:34:32.967 00:34:32.967 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1862407: Mon Jun 10 19:18:45 2024 00:34:32.967 read: 
IOPS=24.2k, BW=94.3MiB/s (98.9MB/s)(944MiB/10001msec) 00:34:32.967 slat (usec): min=15, max=433, avg=57.04, stdev=32.10 00:34:32.967 clat (usec): min=23, max=1497, avg=318.75, stdev=195.62 00:34:32.967 lat (usec): min=67, max=1630, avg=375.79, stdev=210.36 00:34:32.967 clat percentiles (usec): 00:34:32.967 | 50.000th=[ 265], 99.000th=[ 922], 99.900th=[ 1106], 99.990th=[ 1237], 00:34:32.967 | 99.999th=[ 1352] 00:34:32.967 write: IOPS=26.6k, BW=104MiB/s (109MB/s)(1012MiB/9738msec); 0 zone resets 00:34:32.967 slat (usec): min=22, max=1692, avg=67.47, stdev=31.43 00:34:32.967 clat (usec): min=23, max=2267, avg=356.56, stdev=204.68 00:34:32.967 lat (usec): min=69, max=2403, avg=424.03, stdev=218.58 00:34:32.967 clat percentiles (usec): 00:34:32.967 | 50.000th=[ 314], 99.000th=[ 996], 99.900th=[ 1156], 99.990th=[ 1450], 00:34:32.967 | 99.999th=[ 2114] 00:34:32.967 bw ( KiB/s): min=83448, max=149456, per=97.99%, avg=104318.32, stdev=4663.95, samples=76 00:34:32.967 iops : min=20862, max=37364, avg=26079.58, stdev=1165.99, samples=76 00:34:32.967 lat (usec) : 50=0.01%, 100=4.07%, 250=37.06%, 500=39.31%, 750=14.65% 00:34:32.967 lat (usec) : 1000=4.21% 00:34:32.967 lat (msec) : 2=0.69%, 4=0.01% 00:34:32.967 cpu : usr=99.66%, sys=0.00%, ctx=118, majf=0, minf=283 00:34:32.967 IO depths : 1=3.1%, 2=27.7%, 4=55.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.967 complete : 0=0.0%, 4=87.8%, 8=12.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.967 issued rwts: total=241539,259183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.967 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:32.967 00:34:32.967 Run status group 0 (all jobs): 00:34:32.967 READ: bw=94.3MiB/s (98.9MB/s), 94.3MiB/s-94.3MiB/s (98.9MB/s-98.9MB/s), io=944MiB (989MB), run=10001-10001msec 00:34:32.967 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1012MiB (1062MB), run=9738-9738msec 00:34:32.967 
00:34:32.967 real 0m13.510s 00:34:32.967 user 0m53.494s 00:34:32.967 sys 0m0.520s 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:32.967 ************************************ 00:34:32.967 END TEST bdev_fio_rw_verify 00:34:32.967 ************************************ 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:34:32.967 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1298 -- # touch 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "c68d6942-de53-5799-929d-94b7aad490ae"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c68d6942-de53-5799-929d-94b7aad490ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "9340dad9-ba67-5cde-be96-c8f90bf59fc6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9340dad9-ba67-5cde-be96-c8f90bf59fc6",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "fb8b9c32-5e62-5d9f-939e-8233a842c225"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "fb8b9c32-5e62-5d9f-939e-8233a842c225",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "0e823780-b823-5e46-930c-6ffdaeb14ec6"' ' ],' ' 
"product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "0e823780-b823-5e46-930c-6ffdaeb14ec6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:34:32.968 crypto_ram1 00:34:32.968 crypto_ram2 00:34:32.968 crypto_ram3 ]] 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "c68d6942-de53-5799-929d-94b7aad490ae"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c68d6942-de53-5799-929d-94b7aad490ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "9340dad9-ba67-5cde-be96-c8f90bf59fc6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9340dad9-ba67-5cde-be96-c8f90bf59fc6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "fb8b9c32-5e62-5d9f-939e-8233a842c225"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "fb8b9c32-5e62-5d9f-939e-8233a842c225",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "0e823780-b823-5e46-930c-6ffdaeb14ec6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "0e823780-b823-5e46-930c-6ffdaeb14ec6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo 
filename=crypto_ram 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:32.968 ************************************ 00:34:32.968 START 
TEST bdev_fio_trim 00:34:32.968 ************************************ 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:32.968 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local sanitizers 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # shift 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # local asan_lib= 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:32.969 19:18:45 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libasan 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:32.969 19:18:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:34:32.969 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:32.969 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:32.969 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:32.969 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:32.969 fio-3.35 00:34:32.969 Starting 4 threads 00:34:32.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.969 EAL: Requested device 0000:b6:01.0 cannot be used 00:34:45.167 00:34:45.167 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1864823: Mon Jun 10 19:18:58 2024 00:34:45.167 write: IOPS=38.6k, BW=151MiB/s (158MB/s)(1508MiB/10001msec); 0 zone resets 00:34:45.167 slat (usec): min=16, max=455, avg=60.08, stdev=35.32 00:34:45.167 clat (usec): min=17, max=1439, avg=221.09, stdev=125.36 00:34:45.167 lat (usec): min=46, max=1480, avg=281.16, stdev=144.94 00:34:45.167 clat percentiles (usec): 00:34:45.167 | 50.000th=[ 194], 99.000th=[ 619], 99.900th=[ 717], 99.990th=[ 799], 00:34:45.167 | 99.999th=[ 1287] 00:34:45.167 bw ( KiB/s): min=144768, max=219108, per=100.00%, avg=154844.84, stdev=3984.77, samples=76 00:34:45.167 iops : min=36192, max=54777, avg=38711.11, stdev=996.20, samples=76 00:34:45.167 trim: IOPS=38.6k, BW=151MiB/s (158MB/s)(1508MiB/10001msec); 0 zone 
resets
00:34:45.167 slat (usec): min=5, max=241, avg=16.18, stdev= 6.43
00:34:45.167 clat (usec): min=46, max=1480, avg=281.33, stdev=144.95
00:34:45.167 lat (usec): min=56, max=1490, avg=297.51, stdev=146.85
00:34:45.167 clat percentiles (usec):
00:34:45.167 | 50.000th=[ 249], 99.000th=[ 742], 99.900th=[ 840], 99.990th=[ 947],
00:34:45.167 | 99.999th=[ 1352]
00:34:45.167 bw ( KiB/s): min=144768, max=219108, per=100.00%, avg=154844.42, stdev=3984.81, samples=76
00:34:45.167 iops : min=36192, max=54777, avg=38711.21, stdev=996.19, samples=76
00:34:45.167 lat (usec) : 20=0.01%, 50=0.67%, 100=8.54%, 250=49.83%, 500=34.15%
00:34:45.167 lat (usec) : 750=6.36%, 1000=0.43%
00:34:45.167 lat (msec) : 2=0.01%
00:34:45.167 cpu : usr=99.62%, sys=0.00%, ctx=105, majf=0, minf=103
00:34:45.167 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:45.167 issued rwts: total=0,386068,386068,0 short=0,0,0,0 dropped=0,0,0,0
00:34:45.167 latency : target=0, window=0, percentile=100.00%, depth=8
00:34:45.167
00:34:45.167 Run status group 0 (all jobs):
00:34:45.167 WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=1508MiB (1581MB), run=10001-10001msec
00:34:45.167 TRIM: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=1508MiB (1581MB), run=10001-10001msec
00:34:45.167
00:34:45.167 real 0m13.497s
00:34:45.167 user 0m54.105s
00:34:45.167 sys 0m0.508s
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # xtrace_disable
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:34:45.167 ************************************
00:34:45.167 END TEST bdev_fio_trim
00:34:45.167 ************************************
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd
00:34:45.167 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT
00:34:45.167
00:34:45.167 real 0m27.363s
00:34:45.167 user 1m47.780s
00:34:45.167 sys 0m1.224s
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:34:45.167 ************************************
00:34:45.167 END TEST bdev_fio
00:34:45.167 ************************************
00:34:45.167 19:18:59 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:34:45.167 19:18:59 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:34:45.167 19:18:59 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']'
00:34:45.167 19:18:59 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable
00:34:45.167 19:18:59 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:45.167 ************************************
00:34:45.167 START TEST bdev_verify
00:34:45.167 ************************************
00:34:45.167 19:18:59 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:34:45.167 [2024-06-10 19:18:59.434992] Starting
SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:34:45.167 [2024-06-10 19:18:59.435044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1866564 ]
00:34:45.167 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:45.167 EAL: Requested device 0000:b6:01.0 cannot be used
00:34:45.167 [the two messages above repeat for each remaining QAT device 0000:b6:01.1-0000:b6:02.7 and 0000:b8:01.0-0000:b8:02.7]
00:34:45.167 [2024-06-10 19:18:59.569377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:45.168 [2024-06-10 19:18:59.653588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:34:45.168 [2024-06-10 19:18:59.653599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:34:45.168 [2024-06-10 19:18:59.674881] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:34:45.168 [2024-06-10 19:18:59.682916] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:34:45.168 [2024-06-10 19:18:59.690932] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:34:45.168 [2024-06-10 19:18:59.789433] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:34:47.688 [2024-06-10 19:19:01.970459] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:34:47.688 [2024-06-10 19:19:01.970541] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name:
Malloc0
00:34:47.688 [2024-06-10 19:19:01.970555] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:47.688 [2024-06-10 19:19:01.978478] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:34:47.688 [2024-06-10 19:19:01.978495] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:34:47.688 [2024-06-10 19:19:01.978506] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:47.688 [2024-06-10 19:19:01.986498] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:34:47.688 [2024-06-10 19:19:01.986514] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:34:47.688 [2024-06-10 19:19:01.986524] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:47.688 [2024-06-10 19:19:01.994520] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:34:47.688 [2024-06-10 19:19:01.994535] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:34:47.688 [2024-06-10 19:19:01.994545] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:47.688 Running I/O for 5 seconds...
00:34:52.941
00:34:52.941 Latency(us)
00:34:52.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:52.941 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:52.941 Verification LBA range: start 0x0 length 0x1000
00:34:52.941 crypto_ram : 5.07 518.05 2.02 0.00 0.00 245816.95 2686.98 167772.16
00:34:52.941 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:52.941 Verification LBA range: start 0x1000 length 0x1000
00:34:52.941 crypto_ram : 5.06 516.70 2.02 0.00 0.00 246490.67 2883.58 167772.16
00:34:52.941 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:52.941 Verification LBA range: start 0x0 length 0x1000
00:34:52.941 crypto_ram1 : 5.07 520.94 2.03 0.00 0.00 244043.33 3774.87 159383.55
00:34:52.941 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:52.941 Verification LBA range: start 0x1000 length 0x1000
00:34:52.941 crypto_ram1 : 5.06 519.68 2.03 0.00 0.00 244687.66 2857.37 159383.55
00:34:52.941 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:52.941 Verification LBA range: start 0x0 length 0x1000
00:34:52.941 crypto_ram2 : 5.05 4055.22 15.84 0.00 0.00 31310.12 4744.81 26109.54
00:34:52.942 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:52.942 Verification LBA range: start 0x1000 length 0x1000
00:34:52.942 crypto_ram2 : 5.04 4038.62 15.78 0.00 0.00 31417.79 6527.39 26214.40
00:34:52.942 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:52.942 Verification LBA range: start 0x0 length 0x1000
00:34:52.942 crypto_ram3 : 5.05 4053.06 15.83 0.00 0.00 31228.99 5295.31 26004.68
00:34:52.942 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:52.942 Verification LBA range: start 0x1000 length 0x1000
00:34:52.942 crypto_ram3 : 5.05 4054.91 15.84 0.00 0.00 31227.70 2136.47 26109.54
00:34:52.942 ===================================================================================================================
00:34:52.942 Total : 18277.18 71.40 0.00 0.00 55662.75 2136.47 167772.16
00:34:52.942
00:34:52.942 real 0m8.140s
00:34:52.942 user 0m15.473s
00:34:52.942 sys 0m0.357s
00:34:52.942 19:19:07 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable
00:34:52.942 19:19:07 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:34:52.942 ************************************
00:34:52.942 END TEST bdev_verify
00:34:52.942 ************************************
00:34:52.942 19:19:07 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:34:52.942 19:19:07 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']'
00:34:52.942 19:19:07 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable
00:34:52.942 19:19:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:52.942 ************************************
00:34:52.942 START TEST bdev_verify_big_io
00:34:52.942 ************************************
00:34:52.942 19:19:07 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:34:52.942 [2024-06-10 19:19:07.652185] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:34:52.942 [2024-06-10 19:19:07.652245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867903 ]
00:34:53.199 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:53.199 EAL: Requested device 0000:b6:01.0 cannot be used
00:34:53.199 [the two messages above repeat for each remaining QAT device 0000:b6:01.1-0000:b6:02.7 and 0000:b8:01.0-0000:b8:02.7]
00:34:53.199 [2024-06-10 19:19:07.784539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:53.199 [2024-06-10 19:19:07.868109] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:34:53.199 [2024-06-10 19:19:07.868114] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:34:53.199 [2024-06-10 19:19:07.889501] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:34:53.199 [2024-06-10 19:19:07.897533] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:34:53.199 [2024-06-10 19:19:07.905547] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:34:53.456 [2024-06-10 19:19:08.007233] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:34:55.975 [2024-06-10 19:19:10.187492] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:34:55.976 [2024-06-10 19:19:10.187567] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:34:55.976
[2024-06-10 19:19:10.187586] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:55.976 [2024-06-10 19:19:10.195512] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:34:55.976 [2024-06-10 19:19:10.195529] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:34:55.976 [2024-06-10 19:19:10.195540] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:55.976 [2024-06-10 19:19:10.203534] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:34:55.976 [2024-06-10 19:19:10.203550] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:34:55.976 [2024-06-10 19:19:10.203561] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:55.976 [2024-06-10 19:19:10.211557] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:34:55.976 [2024-06-10 19:19:10.211572] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:34:55.976 [2024-06-10 19:19:10.211587] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:34:55.976 Running I/O for 5 seconds...
00:34:56.542 [2024-06-10 19:19:11.068446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:56.544 [the "Failed to get src_mbufs!" error above repeats continuously from 19:19:11.068850 through 19:19:11.104164]
00:34:56.544 [2024-06-10 19:19:11.107225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.107886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.108277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.108300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.108316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.108331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.544 [2024-06-10 19:19:11.111286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.111949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.112345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.112363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.112377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.112392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.544 [2024-06-10 19:19:11.115398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.115991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.116360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.116375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.116389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.116404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.544 [2024-06-10 19:19:11.119416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.119991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.120367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.120383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.120400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.120415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.123467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.123512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.123553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.123606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.544 [2024-06-10 19:19:11.123957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.124451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.127392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.127458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.127506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.127561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.127964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.544 [2024-06-10 19:19:11.128006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.128063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.128114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.128446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.128462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.128476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.128490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.131654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.131726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.131777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.131827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.544 [2024-06-10 19:19:11.132336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.132825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.135522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.135570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.135616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.135663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.136132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.544 [2024-06-10 19:19:11.136173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.136212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.136252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.136656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.136673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.136688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.136702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.139446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.139489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.139527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.139564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.140524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.140571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.143509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.143553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.143601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.143639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.144640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.144670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.147443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.147487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.147526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.147564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.147950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.147991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.148029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.148066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.148459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.148476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.148490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.148505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.151973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.152376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.152392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.152407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.152422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.155751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.156123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.156139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.156153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.156166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.158854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.158897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.158936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.158975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.159918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [2024-06-10 19:19:11.162628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.545 [2024-06-10 19:19:11.162673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.545 [previous message repeated for each task between 19:19:11.162673 and 19:19:11.337665; last occurrence follows] 00:34:56.808 [2024-06-10 19:19:11.337665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.808 [2024-06-10 19:19:11.337679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.337698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.341291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.342821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.344339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.345019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.346702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.348278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.349901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.350256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.350677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.350693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.350711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.808 [2024-06-10 19:19:11.350726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.354327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.355859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.356635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.358085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.359822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.361325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.361695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.362056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.362443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.362460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.362473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.362487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.808 [2024-06-10 19:19:11.365870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.367011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.368496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.369822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.371604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.372332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.372703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.373061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.373523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.808 [2024-06-10 19:19:11.373540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.373554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.373569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.376851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.377915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.379194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.380709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.382084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.382443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.382802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.383158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.383581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.383599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.383616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.383631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.385806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.387097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.388626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.390153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.390793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.391151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.391506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.391867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.392140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.392156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.392170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.392183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.395297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.396897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.398424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.399746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.400519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.400882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.401239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.402468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.402769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.402786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.402799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.402812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.405788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.407380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.409000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.409356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.410106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.410464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.411341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.412630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.412884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.412900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.412913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.412927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.415971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.417503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.417899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.418258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.418995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.419594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.420867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.422405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.422668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.422684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.422697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.422711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.425817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.426403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.426765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.427123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.427915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.429294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.430821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.432338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.432597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.432614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.432628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.432642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.435149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.435513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.435883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.436242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.438279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.439803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.441448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.809 [2024-06-10 19:19:11.443004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.443379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.443395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.443408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.443422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.445259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.445633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.445992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.446350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.447867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.449381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.809 [2024-06-10 19:19:11.450900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.451612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.810 [2024-06-10 19:19:11.451891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.451908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.451922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.451936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.453889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.454249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.454616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.456178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.457998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.459520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.460153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.461480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.461742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.810 [2024-06-10 19:19:11.461758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.461772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.461785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.463940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.464300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.465297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.466570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.468358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.469510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.470992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.472350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.472616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.472632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.810 [2024-06-10 19:19:11.472645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.472659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.474923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.475723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.477015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.478449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.479459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.480743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.482203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.483240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.483606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.483623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:56.810 [2024-06-10 19:19:11.483637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:56.810 [2024-06-10 19:19:11.483651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:57.073 [previous message repeated continuously through 2024-06-10 19:19:11.567376; identical accel_dpdk_cryptodev.c:468 "Failed to get src_mbufs!" errors omitted]
00:34:57.073 [2024-06-10 19:19:11.567420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.567458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.567498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.567863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.567923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.073 [2024-06-10 19:19:11.568499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.570863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.570918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.570956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.570994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.571353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.571416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.571460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.571498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.571559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.572041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.572069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.572084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.572100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.574165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.574753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.575153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.575169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.575183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.575197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.577911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.577954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.577992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.578775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.580328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.580845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.581095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.581112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.581126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.581140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.583465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.583970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.584216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.584232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.584245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.584259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.585747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.585789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.585835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.585872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.586585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.074 [2024-06-10 19:19:11.588829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.074 [2024-06-10 19:19:11.588883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.588922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.588961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.589684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.075 [2024-06-10 19:19:11.591288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.591788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.592032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.592048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.592062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.592075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.075 [2024-06-10 19:19:11.594340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.594923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.595232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.595247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.595260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.595274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.596757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.075 [2024-06-10 19:19:11.596806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.596848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.596891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.597613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.599726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.075 [2024-06-10 19:19:11.599769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.599807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.599847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.600750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.075 [2024-06-10 19:19:11.602280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.602813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.603061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.603077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.603090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.603104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.075 [2024-06-10 19:19:11.605105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.075 [2024-06-10 19:19:11.605147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.079 [2024-06-10 19:19:11.826280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.827592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.829135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.829392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.830186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.830545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.830907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.831267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.831634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.831650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.831663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.831677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.833982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.835258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.836768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.838291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.838652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.839028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.839383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.839743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.840371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.840628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.840645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.840659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.840673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.843428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.844953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.846482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.847356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.847761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.848127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.848483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.848845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.850300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.850554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.850569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.850587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.850601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.853670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.855194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.856411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.856776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.857203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.857568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.857928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.859459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.860854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.861104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.861120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.861134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.861148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.864202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.865870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.866231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.866591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.866970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.867335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.868137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.869678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.871202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.871498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.871514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.871528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.871543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.873360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.873728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.874083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.874438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.874797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.875183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.875542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.875901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.876258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.876663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.876679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.876693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.876706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.879253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.879622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.879986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.880344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.880759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.881128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.881483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.881860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.882225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.882690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.882707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.882722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.882736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.885222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.885590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.885948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.886306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.886668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.887046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.887402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.887765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.888123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.888537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.888553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.888569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.888589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.891049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.891414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.891784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.892147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.892570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.892943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.893299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.893663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.894033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.894497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.894512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.894526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.894540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.897041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.897402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.897766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.898123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.898535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.898908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.899270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.899634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.899991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.900422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.900444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.900459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.900475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.902961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.340 [2024-06-10 19:19:11.903321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.903688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.904053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.904391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.340 [2024-06-10 19:19:11.904764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.905121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.905475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.905839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.906177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.906194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.906208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.906223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.908850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.341 [2024-06-10 19:19:11.909216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.909262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.909629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.910044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.910424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.910789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.911150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.911518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.911954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.911971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.911987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.912002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.914464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.341 [2024-06-10 19:19:11.914837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.915196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.915238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.915659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.916027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.916388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.916753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.917114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.917532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.917548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.917565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.917585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.341 [2024-06-10 19:19:11.919705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.341 [2024-06-10 19:19:11.919748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:57.343 [2024-06-10 19:19:11.982804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.982842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.982879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.983616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:11.985050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.985780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.986198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.986215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.986229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.986244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:11.988175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.988970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.990480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:11.990521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.990559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.990602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.990887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.990944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.990983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.991037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.991086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.991559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.991581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.991596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.991611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.993538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:11.993591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.993629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.993666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.993910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.993970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.994368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.995897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:11.995938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.995976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.996847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:11.999083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:11.999891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:12.001472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.001962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.002357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.002372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.002385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.002400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.004675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:12.004726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.004763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.004801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.005563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.007094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:12.007135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.008789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.008834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.009714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.012040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.343 [2024-06-10 19:19:12.012085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.012123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.013659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.013909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.013964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.014002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.014039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.014076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.014479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.014495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.343 [2024-06-10 19:19:12.014508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.014522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.016356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.344 [2024-06-10 19:19:12.016725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.017084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.017441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.017701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.018987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.020501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.022023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.022739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.022990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.023007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.023021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.023035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.024907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.344 [2024-06-10 19:19:12.025265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.025624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.026808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.027095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.028623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.030134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.031049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.032658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.032909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.032924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.032938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.032951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.035130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.344 [2024-06-10 19:19:12.035494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.036394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.037673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.037924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.039475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.040670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.042112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.043415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.043671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.043687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.043701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.043715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.344 [2024-06-10 19:19:12.046062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.254533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.254897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.255258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.255677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.256052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.256412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.256777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.257137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.257462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.257478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.257493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.257507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.260026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.260392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.260759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.261135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.261552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.261924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.262280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.262650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.263013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.263483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.263501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.263516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.263530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.266007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.266372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.266732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.267086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.267466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.267842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.268207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.268565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.268928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.269339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.269356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.269373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.269394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.271818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.272179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.272539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.272907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.273334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.273713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.274087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.274446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.274814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.275160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.275176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.275190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.275204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.277834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.278216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.278579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.278938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.279371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.279746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.280114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.280490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.280856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.281242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.281258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.281273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.281286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.283794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.284158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.284517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.284886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.285279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.285655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.286012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.286382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.286745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.287093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.287109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.287123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.287137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.289644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.607 [2024-06-10 19:19:12.290017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.290396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.290760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.291130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.291494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.291857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.607 [2024-06-10 19:19:12.292221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.292584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.292992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.293009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.293024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.293041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.295511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.295881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.295925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.296296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.296711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.297079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.297451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.297823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.298187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.298682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.298701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.298715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.298729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.301247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.301611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.301971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.302017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.302375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.302752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.303112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.303470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.303836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.304171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.304187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.304201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.304215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.306459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.306502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.306541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.306583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.306987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.307587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.309733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.309776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.309814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.309851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.310842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.313116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.313715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.314165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.314182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.314196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.314211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.316287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.316968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.317005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.317257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.317273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.317287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.317301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.319711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.608 [2024-06-10 19:19:12.319767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.319805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.319859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.608 [2024-06-10 19:19:12.320825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.609 [2024-06-10 19:19:12.320839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.609 [2024-06-10 19:19:12.320853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.609 [2024-06-10 19:19:12.322995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.872 [2024-06-10 19:19:12.378048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.378763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.379105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.379121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.379134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.379148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.380652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.872 [2024-06-10 19:19:12.380695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.380733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.380773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.381498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.872 [2024-06-10 19:19:12.383333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.383982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.384385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.384402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.384416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.384431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.385888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.872 [2024-06-10 19:19:12.385936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.385975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.386864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.872 [2024-06-10 19:19:12.388594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.872 [2024-06-10 19:19:12.388636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.388681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.388719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.389682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.391170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.391211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.391955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.391999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.392718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.394567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.394613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.394652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.395922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.398737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.400272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.401794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.402538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.402972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.403342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.403710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.404070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.405571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.405826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.405842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.405855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.405869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.408940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.410462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.411566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.411930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.412339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.412712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.413072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.414455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.415739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.415990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.416005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.416023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.416036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.419233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.420757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.421126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.421484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.421852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.422223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.423273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.424549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.426067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.426316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.426332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.426345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.426359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.429408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.429783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.430144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.430504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.430931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.431672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.432941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.434453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.435974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.436256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.436272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.436286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.436300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.438446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.873 [2024-06-10 19:19:12.438821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.439180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.873 [2024-06-10 19:19:12.439543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.439949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.441439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.443042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.444553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.445920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.446207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.446222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.446236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.446250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.448147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.874 [2024-06-10 19:19:12.448509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.448889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.449267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.449516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.450846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.452349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.453888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.454749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.455053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.455068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.455082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.455095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:57.874 [2024-06-10 19:19:12.457088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:57.874 [2024-06-10 19:19:12.457452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:58.138 (message repeated 272 more times, last at [2024-06-10 19:19:12.647921])
00:34:58.138 [2024-06-10 19:19:12.649353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.649725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.650085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.650333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.650868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.651229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.651616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.651984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.652235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.652255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.652269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.652283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.654763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.138 [2024-06-10 19:19:12.655135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.655704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.656897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.657311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.657748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.659093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.659453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.659820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.660260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.660276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.660290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.660305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.663275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.138 [2024-06-10 19:19:12.663647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.664008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.664367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.664667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.665683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.666042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.667086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.667816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.668223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.668241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.668255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.668269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.670477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.138 [2024-06-10 19:19:12.671870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.671916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.672277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.672690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.673056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.673419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.675033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.675403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.675829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.675847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.675862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.675877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.678302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.138 [2024-06-10 19:19:12.679629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.679990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.680036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.680374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.681675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.138 [2024-06-10 19:19:12.682034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.682393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.683851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.684188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.684204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.684218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.684232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.139 [2024-06-10 19:19:12.686272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.686797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.687232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.687249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.687263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.687279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.689515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.139 [2024-06-10 19:19:12.689562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.689606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.689644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.690695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.692873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.139 [2024-06-10 19:19:12.692916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.692953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.692991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.693862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.139 [2024-06-10 19:19:12.695452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.695987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.696229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.696245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.696258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.696272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.698349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.139 [2024-06-10 19:19:12.698404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.698454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.698493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.698914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.698966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.699465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.700983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.139 [2024-06-10 19:19:12.701027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.701069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.701110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.701358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.139 [2024-06-10 19:19:12.701417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.701841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.703683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.140 [2024-06-10 19:19:12.703727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.703766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.703805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.704833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.140 [2024-06-10 19:19:12.706340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.706861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.707107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.707123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.707136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.707150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.140 [2024-06-10 19:19:12.709018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.140 [2024-06-10 19:19:12.709062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.143 [... same *ERROR* message repeated, timestamps 2024-06-10 19:19:12.709100 through 2024-06-10 19:19:12.826121 ...] 
00:34:58.143 [2024-06-10 19:19:12.828668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.829112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.830435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.831935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.832182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.833812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.834762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.836037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.837554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.837808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.837825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.837838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.837852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.143 [2024-06-10 19:19:12.840367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.841838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.843378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.844896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.143 [2024-06-10 19:19:12.845145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.846020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.847288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.848815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.850326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.850627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.850643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.850662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.850681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.144 [2024-06-10 19:19:12.854834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.856500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.858021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.859396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.859690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.860967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.862481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.864002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.864856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.865253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.865270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.865284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.865300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.144 [2024-06-10 19:19:12.868789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.870444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.871980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.873114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.873400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.874956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.876483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.877464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.877841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.878251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.878268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.878283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.878300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.144 [2024-06-10 19:19:12.881852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.883470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.884425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.885705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.885957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.887536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.888622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.888988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.889355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.889839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.889860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.889875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.144 [2024-06-10 19:19:12.889893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.405 [2024-06-10 19:19:12.892921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.894308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.895597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.897116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.897366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.898105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.898465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.898827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.405 [2024-06-10 19:19:12.899187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.899554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.899570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.899591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.899605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.406 [2024-06-10 19:19:12.902219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.903525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.905059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.906586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.906947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.907317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.907683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.908039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.908928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.909215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.909230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.909245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.909258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.406 [2024-06-10 19:19:12.912044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.913558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.915076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.915512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.915946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.916310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.916671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.917344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.918619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.918867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.918883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.918897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.918910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.406 [2024-06-10 19:19:12.921917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.923430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.924158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.924517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.924937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.925315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.925693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.927083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.928612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.928860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.928875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.928889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.928903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.406 [2024-06-10 19:19:12.932011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.932929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.933301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.933663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.934091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.934454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.935996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.937643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.939307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.939643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.939659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.939672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.939686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.406 [2024-06-10 19:19:12.942083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.942446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.943786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.944215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.944628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.945977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.947243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.948757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.950269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.950630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.950646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.950660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.950674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.406 [2024-06-10 19:19:12.952446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.952811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.953168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.953523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.406 [2024-06-10 19:19:12.953790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.955080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.956619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.958142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.958688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.958936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.958952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.958965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.958979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.407 [2024-06-10 19:19:12.960896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.961257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.961616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.961977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.962474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.962859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.963220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.963572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.963935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.964255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.964272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.964286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.407 [2024-06-10 19:19:12.964300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.407 [2024-06-10 19:19:12.966850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:58.410 [2024-06-10 19:19:13.098424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.098957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.099299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.099315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.099333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.099347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.410 [2024-06-10 19:19:13.101809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.101853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.101891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.101928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.102708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.410 [2024-06-10 19:19:13.104221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.104263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.410 [2024-06-10 19:19:13.104302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.104350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.104603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.104648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.104704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.104742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.104779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.105027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.105044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.105058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.105072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.411 [2024-06-10 19:19:13.107286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.107854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.108100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.108116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.108129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.108143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.411 [2024-06-10 19:19:13.109657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.109699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.109737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.109774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.110470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.411 [2024-06-10 19:19:13.112721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.112764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.112803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.112840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.113645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.411 [2024-06-10 19:19:13.115157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.115993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.116007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.411 [2024-06-10 19:19:13.118099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.118142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.118184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.118222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.118653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.118705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.411 [2024-06-10 19:19:13.118744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.118781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.118819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.119135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.119151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.119165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.119182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.412 [2024-06-10 19:19:13.120671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.120712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.120749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.120786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.121556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.412 [2024-06-10 19:19:13.123541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.123589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.123627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.123665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.124514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.412 [2024-06-10 19:19:13.126040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.126921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.412 [2024-06-10 19:19:13.128945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.128992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.129952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.412 [2024-06-10 19:19:13.131462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.131979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.132021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.132265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.132281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.132294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.412 [2024-06-10 19:19:13.132308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.412 [2024-06-10 19:19:13.134171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same *ERROR* message repeated continuously from 19:19:13.134171 through 19:19:13.277929 ...]
00:34:58.675 [2024-06-10 19:19:13.277929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:58.675 [2024-06-10 19:19:13.279203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.280759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.281010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.281026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.282110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.283349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.284637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.286185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.286438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.286454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.286467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.288919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.289288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.675 [2024-06-10 19:19:13.289664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.290025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.290378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.290394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.290761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.291122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.291485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.291857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.292285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.292302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.292317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.294760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.295124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.675 [2024-06-10 19:19:13.295481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.295848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.296279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.296295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.296672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.297047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.297404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.297765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.298147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.298163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.298177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.300743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.301109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.675 [2024-06-10 19:19:13.301474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.301844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.302261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.302278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.302649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.303005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.303365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.303730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.304123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.304139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.304153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.675 [2024-06-10 19:19:13.306659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.307022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.307377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.307740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.308176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.308193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.308558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.308922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.309303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.309661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.310114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.310131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.310145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.312592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.312956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.313315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.313689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.314051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.314067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.314432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.314794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.315152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.315513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.315949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.315965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.315980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.318516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.318889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.319251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.319611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.320002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.320018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.320378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.320743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.321104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.321461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.321876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.321894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.321908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.324422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.324792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.325149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.325510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.325976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.325992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.326362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.326725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.327082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.327434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.327829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.327846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.327860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.330325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.330696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.331056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.331416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.331821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.331839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.332198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.332551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.332914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.333277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.333646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.333663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.333677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.336159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.336525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.336886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.337242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.337653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.337670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.338035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.338398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.338778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.339138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.339568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.339590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.339605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.342039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.342398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.342762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.343135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.343614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.343631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.343997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.344353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.344716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.345075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.345448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.345467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.345481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.347995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.676 [2024-06-10 19:19:13.348363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.676 [2024-06-10 19:19:13.348726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.349082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.349466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.349482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.349850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.350210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.350569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.350937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.351361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.351378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.351392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.353841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.354199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.677 [2024-06-10 19:19:13.354569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.354932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.355285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.355301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.355677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.356049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.356403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.356763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.357141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.357158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.357171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.359224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.677 [2024-06-10 19:19:13.359593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.677 [2024-06-10 19:19:13.359952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:58.941 [... previous *ERROR* line repeated many times, timestamps 19:19:13.360309 through 19:19:13.478110 ...]
00:34:58.941 [2024-06-10 19:19:13.478149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.478632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.479057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.479073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.479087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.481227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.941 [2024-06-10 19:19:13.481269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.481309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.481816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.482059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.482074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.482088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.483617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.483668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.483705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.483742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.483986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.484496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.486380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.486996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.487035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.487374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.487390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.487403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.488800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.488842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.488881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.488919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.489628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.491404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.491446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.491485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.491523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.491901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.491918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.491964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.492002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.492042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.492081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.492488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.492505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.492524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.493925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.493967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.494004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.494943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.496418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.496459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.496500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.496537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.496975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.496993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.497573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.499346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.499392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.942 [2024-06-10 19:19:13.499429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.942 [2024-06-10 19:19:13.499470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.499721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.499736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.499791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.499829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.499866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.499903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.500146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.500161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.500175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.501715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.501756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.943 [2024-06-10 19:19:13.501794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.501836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.502670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.504844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.504885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.943 [2024-06-10 19:19:13.504922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.504959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.505727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.943 [2024-06-10 19:19:13.507332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.507797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.508177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.508194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.508208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.943 [2024-06-10 19:19:13.510098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.510744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.511075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.511090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.511108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.512527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:58.943 [2024-06-10 19:19:13.512568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:58.943 [2024-06-10 19:19:13.512612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:58.946 [2024-06-10 19:19:13.692843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.693231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.693697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.693718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.694084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.694442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.694810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.695177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.695518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.695534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.695547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.697704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.698074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.207 [2024-06-10 19:19:13.698439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.698803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.699214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.699230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.699595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.699953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.700313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.700686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.700959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.700974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.700988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.704671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.705045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.207 [2024-06-10 19:19:13.705419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.705802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.706218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.706235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.706618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.707022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.708399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.708763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.709135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.709151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.709165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.711473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.711846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.207 [2024-06-10 19:19:13.712206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.712560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.712921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.712938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.713307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.714246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.715070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.715427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.715709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.715725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.715739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.719190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.719552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.207 [2024-06-10 19:19:13.719921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.720285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.720538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.720554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.721021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.721378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.723022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.723381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.723806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.207 [2024-06-10 19:19:13.723823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.723837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.726356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.726732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.727111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.727844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.728095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.728111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.728475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.729160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.730246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.730612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.730969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.730985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.730999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.734252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.735349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.736022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.736380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.736639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.736656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.737351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.737716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.738078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.738441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.738893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.738913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.738929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.741434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.743101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.743468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.743993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.744241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.744258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.744637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.744999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.745366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.745736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.746153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.746170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.746185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.749104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.749980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.750877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.751239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.751557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.751573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.752694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.753713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.754456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.754834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.755245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.755262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.755277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.758835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.759210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.759569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.761118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.761566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.761588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.761956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.762322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.762750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.764087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.764534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.764551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.764566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.768757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.769123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.770146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.770889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.771313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.771330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.772478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.773761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.775274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.776798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.777151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.777168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.777183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.779116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.780476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.208 [2024-06-10 19:19:13.780842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.781632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.781884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.781900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.782271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.783130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.784409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.785931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.208 [2024-06-10 19:19:13.786181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.786201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.786215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.791865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.792240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.209 [2024-06-10 19:19:13.792727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.794010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.794439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.794462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.795074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.796363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.797881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.799393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.799654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.799671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.799685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.802157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.209 [2024-06-10 19:19:13.803616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.209 [2024-06-10 19:19:13.803983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:59.212 (previous message repeated for every entry between 2024-06-10 19:19:13.804026 and 2024-06-10 19:19:13.924282)
00:34:59.212 [2024-06-10 19:19:13.924319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.924801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.925142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.925158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.925171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.926629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.926672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.212 [2024-06-10 19:19:13.926710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.926749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.926997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.927652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.930894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.212 [2024-06-10 19:19:13.930940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.212 [2024-06-10 19:19:13.930978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.931862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.933369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.933862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.934279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.934295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.934310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.936623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.936671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.936719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.936757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.937515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.938930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.938971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.939008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.939889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.942308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.942774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.943022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.943037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.943050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.944662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.944707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.944748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.944785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.945521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.947983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.948081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.948840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.950374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.950416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.213 [2024-06-10 19:19:13.950453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.950490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.950740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.213 [2024-06-10 19:19:13.950756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.950806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.950844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.950886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.950927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.951318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.951334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.951348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.214 [2024-06-10 19:19:13.955474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.955961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.956005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.956257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.956272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.956286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.957835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.957882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.214 [2024-06-10 19:19:13.957928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.959503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.959863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.959881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.959941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.959990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.960031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.960069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.214 [2024-06-10 19:19:13.960346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.960367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.960383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.962765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.964300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.474 [2024-06-10 19:19:13.964345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.965862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.966218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.966234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.966291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.966330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.967607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.967648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.967896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.967912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.967926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.969471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.474 [2024-06-10 19:19:13.969839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.474 [2024-06-10 19:19:13.969884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical *ERROR* entries from accel_dpdk_cryptodev.c:468 repeat continuously between 19:19:13.969884 and 19:19:14.172080; intermediate occurrences elided]
00:34:59.477 [2024-06-10 19:19:14.172080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:59.477 [2024-06-10 19:19:14.172435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.172894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.173148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.173164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.173534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.173896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.174263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.174804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.175054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.175070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.175084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.177977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.179420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.477 [2024-06-10 19:19:14.179784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.180424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.180679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.180696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.181068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.181430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.181803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.182514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.182771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.182788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.182801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.185790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.187060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.477 [2024-06-10 19:19:14.187420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.188200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.188463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.188480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.188854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.189215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.189585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.190536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.190842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.190858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.190872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.194035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.195050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.477 [2024-06-10 19:19:14.195408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.196421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.196740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.196756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.197127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.197668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.198889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.199724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.200011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.200027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.200041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.204927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.477 [2024-06-10 19:19:14.205295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.478 [2024-06-10 19:19:14.206629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.207061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.207473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.207493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.208714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.209265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.209623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.209987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.210356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.210372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.210387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.216166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.216557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.478 [2024-06-10 19:19:14.218197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.218553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.218962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.218979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.220599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.220963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.221321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.221693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.222143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.222159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.222173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.478 [2024-06-10 19:19:14.227985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.229518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.738 [2024-06-10 19:19:14.231053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.231898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.232152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.232167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.233441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.234957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.236484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.237290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.237550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.237567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.237587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.241430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.242963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.738 [2024-06-10 19:19:14.243941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.245614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.245871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.245890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.247416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.248942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.249523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.250740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.251147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.251164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.251179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.255835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.257183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.738 [2024-06-10 19:19:14.258486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.259717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.259971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.259986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.261515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.262328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.263961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.264314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.264720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.264737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.264750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.270808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.271727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.738 [2024-06-10 19:19:14.273001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.273045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.273295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.273310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.274855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.275826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.277363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.277740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.278165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.278181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.278196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.284651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.284709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.738 [2024-06-10 19:19:14.285663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.285705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.285989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.286005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.287538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.289051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.289093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.289855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.290109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.290124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.290138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.293806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.293857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.738 [2024-06-10 19:19:14.295449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.297048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.297411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.297426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.298705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.738 [2024-06-10 19:19:14.298750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.300267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.301794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.302086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.302102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.302117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.307680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.309211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.739 [2024-06-10 19:19:14.310828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.310871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.311122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.311137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.311197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.312633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.313832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.313875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.314157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.314172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.314186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.321129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.739 [2024-06-10 19:19:14.321505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.739 [2024-06-10 19:19:14.321547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:59.742 [last message repeated ~270 more times between 2024-06-10 19:19:14.321547 and 19:19:14.436322; identical entries elided]
00:34:59.742 [2024-06-10 19:19:14.436367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.436843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.437086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.437102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.437115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.742 [2024-06-10 19:19:14.440571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.440973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.441010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.441048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.441293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.441309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.441323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.742 [2024-06-10 19:19:14.445387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.445985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.446024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.446452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.446472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.446486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.449920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.449967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.742 [2024-06-10 19:19:14.450008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.450799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.454870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.454916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.742 [2024-06-10 19:19:14.454955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.454992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.742 [2024-06-10 19:19:14.455740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.460455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.460503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.743 [2024-06-10 19:19:14.460542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.460605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.461624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.743 [2024-06-10 19:19:14.465178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.465939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.469974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.470022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.743 [2024-06-10 19:19:14.470068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.471876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.472125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.472141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.472154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.476383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.476759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.743 [2024-06-10 19:19:14.476804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.477160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.477596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.477614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.477664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.477703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.478862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.478906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.479188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.479204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.479217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.483633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.485195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:59.743 [2024-06-10 19:19:14.485240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.485278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.485698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.485714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.485760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.486117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.486158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.486212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.486663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.486680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.486697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.491820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:59.743 [2024-06-10 19:19:14.491878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.003 [2024-06-10 19:19:14.491921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.492594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.493037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.493055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.493423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.493466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.493505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.493866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.494215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.494231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.494245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.498202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.498249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.003 [2024-06-10 19:19:14.499778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.499830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.500995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.501016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.501030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.504351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.505094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.003 [2024-06-10 19:19:14.505140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.505177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.505435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.505452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.505521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.507040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.507083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.507121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.507368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.507384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.507397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.511976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.512031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.003 [2024-06-10 19:19:14.512078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.513602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.513857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.513873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.514777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.514821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.514859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.516130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.516381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.516396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.516410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.521883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.003 [2024-06-10 19:19:14.523437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.007 [2024-06-10 19:19:14.702522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.703686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.703971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.703986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.705533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.707069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.707851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.708217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.708639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.708656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.708671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.712163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.713705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.007 [2024-06-10 19:19:14.714828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.716101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.716352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.716367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.717910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.718841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.719211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.719569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.720028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.720045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.720060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.723121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.723963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.007 [2024-06-10 19:19:14.725236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.726757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.727007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.727022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.728253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.728617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.728972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.729341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.729770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.729791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.729806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.731910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.733188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.007 [2024-06-10 19:19:14.734697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.736203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.736456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.736472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.736850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.737206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.737562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.737923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.738170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.738185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.738199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.741284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.742939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.007 [2024-06-10 19:19:14.744485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.745916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.746318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.746334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.746706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.747063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.747419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.748825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.749106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.749121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.749135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.751977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.753537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.007 [2024-06-10 19:19:14.755111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.755164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.755630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.755651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.756026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.756389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.007 [2024-06-10 19:19:14.756759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.758282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.758542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.758558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.758572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.761685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.761735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.269 [2024-06-10 19:19:14.763233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.763276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.763599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.763615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.763985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.764340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.764393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.764755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.765168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.765185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.765199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.767446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.767499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.269 [2024-06-10 19:19:14.768777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.770283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.770535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.770551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.771562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.771635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.771991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.772347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.772795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.772813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.772828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.774447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.775708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.269 [2024-06-10 19:19:14.777140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.777184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.777462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.777477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.777529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.779060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.780591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.780643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.780981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.780997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.781012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.784394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.785920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.269 [2024-06-10 19:19:14.785964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.787481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.787851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.787868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.789420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.791097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.791142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.792649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.792898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.792917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.792936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.796448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.796496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.269 [2024-06-10 19:19:14.797912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.799442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.799701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.799717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.800425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.800469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.801746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.803263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.803513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.803529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.269 [2024-06-10 19:19:14.803543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.805668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.806030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.270 [2024-06-10 19:19:14.807566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.807632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.807878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.807894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.807940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.809574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.811058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.811101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.811422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.811438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.811451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.812950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.812993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.270 [2024-06-10 19:19:14.813045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.813720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.814095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.814111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.814125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.815935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.270 [2024-06-10 19:19:14.815976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.270 [2024-06-10 19:19:14.816014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [identical *ERROR* message repeated continuously from 19:19:14.816014 through 19:19:14.872930] 00:35:00.273 [2024-06-10 19:19:14.872930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.273 [2024-06-10 19:19:14.872967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.873917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.875992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.273 [2024-06-10 19:19:14.876079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.876843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.273 [2024-06-10 19:19:14.878514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.878982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.879353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.879369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.879384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.273 [2024-06-10 19:19:14.882099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.882175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.274 [2024-06-10 19:19:14.882220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.882588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.882976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.882993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.883612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.885792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.886158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.274 [2024-06-10 19:19:14.886199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.886553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.886963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.886981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.887853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.890266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.890641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.274 [2024-06-10 19:19:14.890689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.890747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.891199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.891217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.891273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.891636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.891678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.891717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.892120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.892136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.892151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.894636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.894684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.274 [2024-06-10 19:19:14.894739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.895094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.895513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.895529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.895910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.895958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.896024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.896389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.896855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.896873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.896887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.899074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.899118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.274 [2024-06-10 19:19:14.899473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.899514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.899932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.899949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.899994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.900046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.900400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.900441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.900839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.900856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.900870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.903189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.903552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.274 [2024-06-10 19:19:14.903600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.903640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.903969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.903984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.904995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.907521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.274 [2024-06-10 19:19:14.907570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.275 [2024-06-10 19:19:14.907615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.907986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.908363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.908379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.908748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.908790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.908829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.909186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.909538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.909554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.909569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.912105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.912468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.275 [2024-06-10 19:19:14.912846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.913203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.913684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.913701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.914067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.914430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.914816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.915176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.915550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.915567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.915587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.918092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.918452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.275 [2024-06-10 19:19:14.918822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.919182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.919520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.919536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.919915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.920270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.920633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.920989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.921349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.921368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.921383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.923936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.924304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.275 [2024-06-10 19:19:14.924677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.925039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.925408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.925425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.925794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.926154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.926517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.926883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.927317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.927334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.927352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.929745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.275 [2024-06-10 19:19:14.930107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.275 [2024-06-10 19:19:14.930463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical *ERROR* line repeated for every entry from 19:19:14.930463 through 19:19:15.130694; duplicates elided]
00:35:00.539 [2024-06-10 19:19:15.130694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:35:00.539 [2024-06-10 19:19:15.131920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.539 [2024-06-10 19:19:15.133191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.539 [2024-06-10 19:19:15.133441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.539 [2024-06-10 19:19:15.133457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.539 [2024-06-10 19:19:15.134999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.539 [2024-06-10 19:19:15.135844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.539 [2024-06-10 19:19:15.136203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.136563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.137019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.137036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.137050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.140082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.140820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.142089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.142134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.142382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.142397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.143939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.145090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.145450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.145816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.146200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.146216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.146229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.149373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.149421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.150128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.150170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.150420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.150436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.152074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.153658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.153700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.154985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.155387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.155403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.155417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.158690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.158736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.160209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.161710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.162052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.162069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.163635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.163686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.165327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.166902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.167152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.167167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.167181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.169392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.170275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.171552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.171601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.171851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.171867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.171920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.173442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.174252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.174295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.174545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.174560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.174574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.176389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.176753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.176794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.177148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.177489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.177508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.178791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.180308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.180351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.181868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.182153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.182169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.182183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.184441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.184502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.184870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.185228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.185671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.185688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.186053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.186099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.187370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.188885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.189135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.189151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.189165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.190702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.192231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.540 [2024-06-10 19:19:15.193246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.193290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.193732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.193748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.193797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.540 [2024-06-10 19:19:15.194151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.194508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.194553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.194958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.194974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.194987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.541 [2024-06-10 19:19:15.196464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.196972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.197263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.197279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.197292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.198953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.198996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.541 [2024-06-10 19:19:15.199034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.199661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.200078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.200094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.200109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.201607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.201653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.541 [2024-06-10 19:19:15.201690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.201727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.202511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.541 [2024-06-10 19:19:15.204096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.204707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.205076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.205092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.205105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.206810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.206851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.541 [2024-06-10 19:19:15.206888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.206925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.207735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.209113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.541 [2024-06-10 19:19:15.209155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.541 [2024-06-10 19:19:15.209197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.545 [message repeated; last occurrence 2024-06-10 19:19:15.271516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.545 [2024-06-10 19:19:15.271558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.271920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.272354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.272370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.272431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.272484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.272852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.272898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.273300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.273316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.273330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.275361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.275725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.545 [2024-06-10 19:19:15.275766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.275805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.276215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.276234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.276280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.276639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.276680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.276719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.277136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.277152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.277171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.279673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.279725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.545 [2024-06-10 19:19:15.279764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.280121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.280474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.280490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.280869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.280917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.280956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.281309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.281682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.281698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.281712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.283883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.283925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.545 [2024-06-10 19:19:15.284280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.284320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.284667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.284684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.284739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.284778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.285133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.285173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.285489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.285505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.285519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.287748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.288126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.545 [2024-06-10 19:19:15.288176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.288230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.288601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.288618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.288674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.289038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.289082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.289122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.289550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.289572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.545 [2024-06-10 19:19:15.289595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.292046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.292102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.292142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.292496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.292935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.292952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.293314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.293374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.293413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.293785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.294144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.294160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.294174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.296900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.297265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.297630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.297988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.298399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.298415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.298788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.299152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.299517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.299883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.300247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.300263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.300277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.302792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.303154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.303509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.303887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.304271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.304289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.304673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.305042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.305403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.305767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.306103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.306119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.306133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.308626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.308990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.309366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.309745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.310159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.310176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.310543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.310907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.311272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.311640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.312084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.312101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.312115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.314601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.314967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.315325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.315687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.316097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.316114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.317536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.317902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.319477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.319843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.320247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.320263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.320277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.322708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.323073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.323430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.323790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.324146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.324162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.324531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.324893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.325250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.325607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.326011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.326028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.326043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.328384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.328756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.806 [2024-06-10 19:19:15.329116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.329478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.329863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.329880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.330248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.330610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.330971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.331332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.806 [2024-06-10 19:19:15.331740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.331765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.331779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.335708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.337060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.807 [2024-06-10 19:19:15.338567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.340146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.340619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.340636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.341923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.343442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.344967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.346107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.346450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.346466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.346480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.349846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:00.807 [2024-06-10 19:19:15.351365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:00.807 [2024-06-10 19:19:15.352886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.071 [... previous *ERROR* message repeated ~270 more times between 2024-06-10 19:19:15.352886 and 19:19:15.565717 ...] 
00:35:01.071 [2024-06-10 19:19:15.566077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.566122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.566519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.566536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.566588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.567624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.568922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.568977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.569236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.071 [2024-06-10 19:19:15.569257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.569271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.572263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.573784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.573838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.574199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.574640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.574657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.575025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.575385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.575428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.576546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.576834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.576850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.576864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.579672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.579720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.581240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.582836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.583312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.583328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.583705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.583750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.584104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.584465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.584820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.584837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.584850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.586245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.587402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.588677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.588723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.588970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.588985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.589037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.590561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.591126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.591170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.591591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.591608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.591623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.593674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.593723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.593763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.593801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.594557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.596142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.596826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.597191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.597207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.597221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.599096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.599920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.601398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.601961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.602000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.602405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.602422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.602441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.604444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.604917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.605166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.605181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.605195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.606744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.072 [2024-06-10 19:19:15.606784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.072 [2024-06-10 19:19:15.606825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.606862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.607844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.609843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.609886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.073 [2024-06-10 19:19:15.609928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.609969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.610681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.073 [2024-06-10 19:19:15.612286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.612764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.613150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.613166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.613180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.073 [2024-06-10 19:19:15.615349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.615904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.616148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.616164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.616177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.617711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.073 [2024-06-10 19:19:15.617758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.073 [2024-06-10 19:19:15.617803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:35:01.076 [... previous message repeated continuously through 2024-06-10 19:19:15.690648 ...]
00:35:01.076 [2024-06-10 19:19:15.690707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.690750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.691196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.691212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.691259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.691625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.691671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.691709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.692105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.692121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.692136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.694573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.694626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.076 [2024-06-10 19:19:15.694664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.695020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.695370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.695386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.695760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.695807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.695862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.696224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.696631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.696648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.696661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.699371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.699741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.076 [2024-06-10 19:19:15.700104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.700696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.700945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.700963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.701476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.702738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.703095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.703454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.703868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.703886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.703901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.706375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.706746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.076 [2024-06-10 19:19:15.707107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.707471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.707805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.707823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.708193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.708553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.708917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.709286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.709662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.709680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.709695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.712385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.712761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.076 [2024-06-10 19:19:15.713123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.713480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.713882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.713900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.714273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.076 [2024-06-10 19:19:15.714640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.715015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.715378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.715811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.715830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.715845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.718350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.718863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.077 [2024-06-10 19:19:15.720140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.721653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.721901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.721918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.723507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.724566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.725839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.727355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.727610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.727627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.727641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.730082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:35:01.077 [2024-06-10 19:19:15.731475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:35:01.641
00:35:01.641 Latency(us)
00:35:01.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:01.641 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x0 length 0x100
00:35:01.641 crypto_ram : 5.95 43.06 2.69 0.00 0.00 2889302.02 253335.96 2442762.65
00:35:01.641 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x100 length 0x100
00:35:01.641 crypto_ram : 5.87 43.64 2.73 0.00 0.00 2837289.37 370776.47 2281701.38
00:35:01.641 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x0 length 0x100
00:35:01.641 crypto_ram1 : 5.95 43.05 2.69 0.00 0.00 2788887.76 253335.96 2254857.83
00:35:01.641 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x100 length 0x100
00:35:01.641 crypto_ram1 : 5.87 43.63 2.73 0.00 0.00 2739607.96 370776.47 2080374.78
00:35:01.641 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x0 length 0x100
00:35:01.641 crypto_ram2 : 5.62 278.64 17.41 0.00 0.00 409649.30 19818.09 607335.22
00:35:01.641 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:01.641
Verification LBA range: start 0x100 length 0x100
00:35:01.641 crypto_ram2 : 5.59 298.46 18.65 0.00 0.00 383856.85 2464.15 593913.45
00:35:01.641 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x0 length 0x100
00:35:01.641 crypto_ram3 : 5.72 290.66 18.17 0.00 0.00 384347.72 49492.79 340577.48
00:35:01.641 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:35:01.641 Verification LBA range: start 0x100 length 0x100
00:35:01.641 crypto_ram3 : 5.72 313.28 19.58 0.00 0.00 357630.11 58720.26 446273.95
00:35:01.641 ===================================================================================================================
00:35:01.642 Total : 1354.41 84.65 0.00 0.00 705750.36 2464.15 2442762.65
00:35:01.938
00:35:01.938 real 0m9.029s
00:35:01.938 user 0m17.151s
00:35:01.938 sys 0m0.447s
00:35:01.938 19:19:16 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable
00:35:01.938 19:19:16 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:35:01.938 ************************************
00:35:01.938 END TEST bdev_verify_big_io
00:35:01.938 ************************************
00:35:01.938 19:19:16 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:01.938 19:19:16 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:35:01.938 19:19:16 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable
00:35:01.938 19:19:16 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:35:02.214 ************************************
00:35:02.214 START TEST bdev_write_zeroes
00:35:02.214 ************************************
00:35:02.214 19:19:16
blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:02.214 [2024-06-10 19:19:16.757295] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:35:02.214 [2024-06-10 19:19:16.757352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869503 ]
00:35:02.214 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:35:02.214 EAL: Requested device 0000:b6:01.0 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.1 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.2 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.3 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.4 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.5 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.6 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:01.7 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.0 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.1 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.2 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.3 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.4 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.5 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.6 cannot be used
00:35:02.214 EAL: Requested device 0000:b6:02.7 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.0 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.1 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.2 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.3 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.4 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.5 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.6 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:01.7 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.0 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.1 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.2 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.3 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.4 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.5 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.6 cannot be used
00:35:02.214 EAL: Requested device 0000:b8:02.7 cannot be used
00:35:02.214 [2024-06-10 19:19:16.890610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:02.471 [2024-06-10 19:19:16.977908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:02.471 [2024-06-10 19:19:16.999112] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:35:02.472 [2024-06-10 19:19:17.007125] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:35:02.472 [2024-06-10 19:19:17.015139] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:35:02.472 [2024-06-10 19:19:17.119238] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:35:04.994 [2024-06-10 19:19:19.307453]
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:35:04.994 [2024-06-10 19:19:19.307515] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:35:04.994 [2024-06-10 19:19:19.307530] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:35:04.994 [2024-06-10 19:19:19.315471] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:35:04.994 [2024-06-10 19:19:19.315495] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:35:04.994 [2024-06-10 19:19:19.315507] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:35:04.994 [2024-06-10 19:19:19.323492] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:35:04.994 [2024-06-10 19:19:19.323509] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:35:04.994 [2024-06-10 19:19:19.323521] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:35:04.994 [2024-06-10 19:19:19.331512] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:35:04.994 [2024-06-10 19:19:19.331529] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:35:04.994 [2024-06-10 19:19:19.331539] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:35:04.994 Running I/O for 1 seconds...
00:35:05.926
00:35:05.926 Latency(us)
00:35:05.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:05.926 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:05.926 crypto_ram : 1.02 2206.54 8.62 0.00 0.00 57652.68 5138.02 69625.45
00:35:05.926 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:05.926 crypto_ram1 : 1.02 2212.07 8.64 0.00 0.00 57205.29 5164.24 64592.28
00:35:05.926 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:05.926 crypto_ram2 : 1.02 17039.70 66.56 0.00 0.00 7408.17 2228.22 9751.76
00:35:05.926 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:05.926 crypto_ram3 : 1.02 17018.29 66.48 0.00 0.00 7387.57 2228.22 8336.18
00:35:05.926 ===================================================================================================================
00:35:05.926 Total : 38476.61 150.30 0.00 0.00 13165.54 2228.22 69625.45
00:35:06.183
00:35:06.183 real 0m4.087s
00:35:06.183 user 0m3.708s
00:35:06.183 sys 0m0.335s
00:35:06.183 19:19:20 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable
00:35:06.183 19:19:20 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:35:06.183 ************************************
00:35:06.183 END TEST bdev_write_zeroes
00:35:06.183 ************************************
00:35:06.183 19:19:20 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:06.183 19:19:20 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']'
00:35:06.183 19:19:20 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable
00:35:06.183 19:19:20
blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:35:06.183 ************************************
00:35:06.183 START TEST bdev_json_nonenclosed
00:35:06.183 ************************************
00:35:06.183 19:19:20 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:06.442 [2024-06-10 19:19:20.928342] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization...
00:35:06.442 [2024-06-10 19:19:20.928401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870072 ]
00:35:06.442 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:35:06.442 EAL: Requested device 0000:b6:01.0 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.1 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.2 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.3 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.4 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.5 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.6 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:01.7 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.0 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.1 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.2 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.3 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.4 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.5 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.6 cannot be used
00:35:06.442 EAL: Requested device 0000:b6:02.7 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.0 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.1 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.2 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.3 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.4 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.5 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.6 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:01.7 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.0 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.1 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.2 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.3 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.4 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.5 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.6 cannot be used
00:35:06.442 EAL: Requested device 0000:b8:02.7 cannot be used
00:35:06.442 [2024-06-10 19:19:21.061189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:06.442 [2024-06-10 19:19:21.144002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:06.442 [2024-06-10 19:19:21.144071] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:35:06.442 [2024-06-10 19:19:21.144090] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:06.442 [2024-06-10 19:19:21.144102] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:06.701 00:35:06.701 real 0m0.362s 00:35:06.701 user 0m0.205s 00:35:06.701 sys 0m0.154s 00:35:06.701 19:19:21 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:06.701 19:19:21 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:35:06.701 ************************************ 00:35:06.701 END TEST bdev_json_nonenclosed 00:35:06.701 ************************************ 00:35:06.701 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:06.701 19:19:21 blockdev_crypto_qat -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:35:06.701 19:19:21 blockdev_crypto_qat -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:06.701 19:19:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:06.701 ************************************ 00:35:06.701 START TEST bdev_json_nonarray 00:35:06.701 ************************************ 00:35:06.701 19:19:21 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:06.701 [2024-06-10 19:19:21.374980] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:35:06.701 [2024-06-10 19:19:21.375038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870272 ] 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.0 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.1 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.2 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.3 cannot be used 00:35:06.701 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.4 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.5 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.6 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b6:02.7 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.0 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.1 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.2 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.3 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.4 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.5 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.6 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:01.7 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.0 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.1 cannot be used 00:35:06.701 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.2 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.3 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.4 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.5 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.6 cannot be used 00:35:06.701 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:06.701 EAL: Requested device 0000:b8:02.7 cannot be used 00:35:06.959 [2024-06-10 19:19:21.506764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.959 [2024-06-10 19:19:21.591030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.959 [2024-06-10 19:19:21.591103] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:35:06.959 [2024-06-10 19:19:21.591122] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:06.959 [2024-06-10 19:19:21.591135] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:06.959 00:35:06.959 real 0m0.360s 00:35:06.959 user 0m0.206s 00:35:06.959 sys 0m0.151s 00:35:06.959 19:19:21 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:06.959 19:19:21 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:35:06.959 ************************************ 00:35:06.959 END TEST bdev_json_nonarray 00:35:06.959 ************************************ 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]] 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]] 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]] 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:35:07.217 19:19:21 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:35:07.217 00:35:07.217 real 1m10.665s 00:35:07.217 user 2m54.209s 00:35:07.217 sys 0m8.619s 00:35:07.217 19:19:21 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:35:07.217 19:19:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:35:07.217 ************************************ 00:35:07.217 END TEST blockdev_crypto_qat 00:35:07.217 ************************************ 00:35:07.217 19:19:21 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:35:07.217 19:19:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:35:07.217 19:19:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:07.217 19:19:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.217 ************************************ 00:35:07.217 START TEST chaining 00:35:07.217 ************************************ 00:35:07.217 19:19:21 chaining -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:35:07.217 * Looking for test storage... 00:35:07.217 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:35:07.217 19:19:21 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@7 -- # uname -s 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.217 19:19:21 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:35:07.217 19:19:21 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.217 19:19:21 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.217 19:19:21 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.218 19:19:21 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.218 19:19:21 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.218 19:19:21 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.218 19:19:21 chaining -- paths/export.sh@5 -- # export PATH 00:35:07.218 19:19:21 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@47 -- # : 0 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:07.218 19:19:21 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:35:07.218 19:19:21 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:35:07.218 19:19:21 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:35:07.218 19:19:21 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:35:07.218 19:19:21 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:35:07.218 19:19:21 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.218 19:19:21 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.218 19:19:21 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:07.218 19:19:21 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:35:07.218 19:19:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:17.188 19:19:30 
chaining -- nvmf/common.sh@296 -- # e810=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@297 -- # x722=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@298 -- # mlx=() 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:17.188 19:19:30 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:17.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:17.188 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:17.188 Found net devices under 0000:af:00.0: cvl_0_0 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:17.188 Found net devices under 0000:af:00.1: cvl_0_1 00:35:17.188 19:19:30 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.189 19:19:30 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.189 19:19:30 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:17.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:35:17.189 00:35:17.189 --- 10.0.0.2 ping statistics --- 00:35:17.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.189 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:35:17.189 00:35:17.189 --- 10.0.0.1 ping statistics --- 00:35:17.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.189 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@422 -- # return 0 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:17.189 19:19:31 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@481 -- # nvmfpid=1874522 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:17.189 19:19:31 chaining -- nvmf/common.sh@482 -- # waitforlisten 1874522 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@830 -- # '[' -z 1874522 ']' 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:17.189 19:19:31 chaining -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:17.189 19:19:31 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.189 [2024-06-10 19:19:31.123365] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:35:17.189 [2024-06-10 19:19:31.123411] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:17.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.189 EAL: Requested device 
0000:b6:02.0 cannot be used 00:35:17.189 [autotest log condensed: the "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pair repeats for each remaining QAT virtual function, 0000:b6:02.1 through 0000:b8:02.7] 00:35:17.189 [2024-06-10 19:19:31.240824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.189 [2024-06-10 19:19:31.327847] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.189 [2024-06-10 19:19:31.327892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.189 [2024-06-10 19:19:31.327906] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.189 [2024-06-10 19:19:31.327919] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:17.189 [2024-06-10 19:19:31.327930] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:17.189 [2024-06-10 19:19:31.327959] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.448 19:19:31 chaining -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:17.448 19:19:31 chaining -- common/autotest_common.sh@863 -- # return 0 00:35:17.448 19:19:31 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:17.448 19:19:31 chaining -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:17.448 19:19:31 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.448 19:19:32 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@69 -- # mktemp 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.7E7AIIinds 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@69 -- # mktemp 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.XiKKlvd7qw 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.448 malloc0 00:35:17.448 true 00:35:17.448 true 00:35:17.448 [2024-06-10 19:19:32.064345] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:17.448 crypto0 00:35:17.448 [2024-06-10 19:19:32.072374] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:35:17.448 crypto1 00:35:17.448 [2024-06-10 19:19:32.080474] tcp.c: 724:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.448 [2024-06-10 19:19:32.096708] tcp.c:1053:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
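[editor's note] At chaining.sh@69-70 above, the harness allocates scratch input/output files with mktemp and arms cleanup traps before doing any I/O. A minimal sketch of that pattern in isolation, with a hypothetical cleanup function standing in for the script's tgtcleanup:

```shell
#!/bin/bash
# Scratch files for the data written through and read back from the
# target (mirrors chaining.sh@69's mktemp calls).
input=$(mktemp)
output=$(mktemp)

# Hypothetical stand-in for chaining.sh's tgtcleanup: remove the scratch
# files so an interrupted run leaves nothing behind (mirrors the @70 trap).
cleanup() {
    rm -f "$input" "$output"
}
trap 'cleanup; exit 1' SIGINT SIGTERM
trap cleanup EXIT

[ -f "$input" ] && [ -f "$output" ] && echo "scratch files ready"
```

The real script's tgtcleanup also tears down the nvmf target; only the temp-file bookkeeping is reproduced here.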
00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@85 -- # update_stats 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.448 19:19:32 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.448 19:19:32 chaining -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:35:17.449 19:19:32 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:17.707 19:19:32 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.707 19:19:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.707 19:19:32 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:17.707 19:19:32 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.707 19:19:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:17.707 19:19:32 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.7E7AIIinds bs=1K count=64 00:35:17.707 64+0 records in 00:35:17.707 64+0 records out 00:35:17.707 65536 bytes (66 kB, 64 KiB) copied, 0.00106525 s, 61.5 MB/s 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.7E7AIIinds --ob Nvme0n1 --bs 65536 --count 1 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@25 -- # local config 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:17.707 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:17.707 "subsystems": [ 00:35:17.707 { 00:35:17.707 "subsystem": "bdev", 00:35:17.707 "config": [ 00:35:17.707 { 00:35:17.707 "method": "bdev_nvme_attach_controller", 00:35:17.707 "params": { 00:35:17.707 "trtype": "tcp", 00:35:17.707 "adrfam": "IPv4", 00:35:17.707 "name": "Nvme0", 00:35:17.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.707 "traddr": "10.0.0.2", 00:35:17.707 "trsvcid": "4420" 00:35:17.707 } 00:35:17.707 }, 00:35:17.707 { 00:35:17.707 "method": "bdev_set_options", 00:35:17.707 "params": { 00:35:17.707 "bdev_auto_examine": false 00:35:17.707 } 00:35:17.707 } 00:35:17.707 ] 00:35:17.707 } 00:35:17.707 ] 00:35:17.707 }' 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.7E7AIIinds --ob Nvme0n1 --bs 65536 --count 1 00:35:17.707 19:19:32 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:17.707 "subsystems": [ 00:35:17.707 { 00:35:17.707 
"subsystem": "bdev", 00:35:17.707 "config": [ 00:35:17.707 { 00:35:17.707 "method": "bdev_nvme_attach_controller", 00:35:17.707 "params": { 00:35:17.707 "trtype": "tcp", 00:35:17.707 "adrfam": "IPv4", 00:35:17.707 "name": "Nvme0", 00:35:17.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.707 "traddr": "10.0.0.2", 00:35:17.707 "trsvcid": "4420" 00:35:17.707 } 00:35:17.707 }, 00:35:17.707 { 00:35:17.707 "method": "bdev_set_options", 00:35:17.707 "params": { 00:35:17.707 "bdev_auto_examine": false 00:35:17.707 } 00:35:17.707 } 00:35:17.707 ] 00:35:17.707 } 00:35:17.707 ] 00:35:17.707 }' 00:35:17.707 [2024-06-10 19:19:32.410187] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:35:17.707 [2024-06-10 19:19:32.410244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874713 ] 00:35:17.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.965 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:17.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.965 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:17.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.966 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:17.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.966 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:17.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.966 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:17.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.966 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:17.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:17.966 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:17.966 
00:35:17.966 [autotest log condensed: the "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pair repeats for each remaining QAT virtual function, 0000:b6:01.7 through 0000:b8:02.7] 00:35:17.966 [2024-06-10 19:19:32.543801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.966 [2024-06-10 19:19:32.627372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.483  Copying: 64/64 [kB] (average 9142 kBps) 00:35:18.483 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.483 
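[editor's note] The get_stat helper traced throughout this run (chaining.sh@37-44) wraps `rpc_cmd accel_get_stats` with a jq projection: a top-level counter when no opcode is given, otherwise the per-opcode "executed" count. A standalone sketch, with canned accel_get_stats output standing in for the live RPC (the counter values are illustrative, taken from this trace):

```shell
#!/bin/bash
# Canned accel_get_stats output; in the real run this JSON comes from
# `rpc_cmd accel_get_stats` against the running target.
stats_json='{"sequence_executed": 13,
  "operations": [
    {"opcode": "encrypt", "executed": 2},
    {"opcode": "decrypt", "executed": 12},
    {"opcode": "copy",    "executed": 4}]}'

# Mirrors chaining.sh@37-44: no opcode -> top-level field; with an opcode,
# select that operation's "executed" counter.
get_stat() {
    local event=$1 opcode=${2:-}
    if [ -z "$opcode" ]; then
        echo "$stats_json" | jq -r ".$event"
    else
        echo "$stats_json" | jq -r \
            ".operations[] | select(.opcode == \"$opcode\").executed"
    fi
}

get_stat sequence_executed   # prints the top-level counter
get_stat executed encrypt    # prints the encrypt opcode's counter
```

The test's update_stats then snapshots these values so later checks like `(( 13 == stats[sequence_executed] + 1 ))` can assert per-operation deltas rather than absolute counts.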
19:19:33 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.483 19:19:33 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@96 -- # update_stats 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:18.483 19:19:33 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.483 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.741 19:19:33 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:18.741 19:19:33 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.XiKKlvd7qw --ib Nvme0n1 --bs 65536 --count 1 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@25 -- # local config 00:35:18.741 
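[editor's note] The jq expression at chaining.sh@32 above appends a bdev_set_options entry to the gen_nvme.sh config by assigning to the index equal to the current array length. A minimal reproduction with a trimmed config (the real one carries the full bdev_nvme_attach_controller parameters):

```shell
#!/bin/bash
# Trimmed stand-in for gen_nvme.sh --json-with-subsystems output.
config='{"subsystems": [{"subsystem": "bdev", "config": [{"method": "bdev_nvme_attach_controller"}]}]}'

# chaining.sh@32: `.config[.config | length] |= X` writes to the index one
# past the current end of the array, i.e. appends X.
patched=$(echo "$config" | jq '.subsystems[0].config[.subsystems[0].config | length] |=
  {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}')

# Prints the appended bdev_set_options entry.
echo "$patched" | jq -c '.subsystems[0].config[1]'
```

Disabling bdev_auto_examine this way keeps spdk_dd from probing the attached namespace for logical volumes or other stacked bdevs before the test drives I/O through it.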
19:19:33 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:18.741 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:18.741 19:19:33 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:18.741 "subsystems": [ 00:35:18.741 { 00:35:18.741 "subsystem": "bdev", 00:35:18.741 "config": [ 00:35:18.741 { 00:35:18.741 "method": "bdev_nvme_attach_controller", 00:35:18.741 "params": { 00:35:18.741 "trtype": "tcp", 00:35:18.741 "adrfam": "IPv4", 00:35:18.742 "name": "Nvme0", 00:35:18.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.742 "traddr": "10.0.0.2", 00:35:18.742 "trsvcid": "4420" 00:35:18.742 } 00:35:18.742 }, 00:35:18.742 { 00:35:18.742 "method": "bdev_set_options", 00:35:18.742 "params": { 00:35:18.742 "bdev_auto_examine": false 00:35:18.742 } 00:35:18.742 } 00:35:18.742 ] 00:35:18.742 } 00:35:18.742 ] 00:35:18.742 }' 00:35:18.742 19:19:33 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.XiKKlvd7qw --ib Nvme0n1 --bs 65536 --count 1 00:35:18.742 19:19:33 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:18.742 "subsystems": [ 00:35:18.742 { 00:35:18.742 "subsystem": "bdev", 00:35:18.742 "config": [ 00:35:18.742 { 00:35:18.742 "method": "bdev_nvme_attach_controller", 00:35:18.742 "params": { 00:35:18.742 "trtype": "tcp", 00:35:18.742 "adrfam": "IPv4", 00:35:18.742 "name": "Nvme0", 00:35:18.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.742 "traddr": "10.0.0.2", 00:35:18.742 "trsvcid": "4420" 00:35:18.742 } 00:35:18.742 }, 00:35:18.742 { 00:35:18.742 "method": "bdev_set_options", 00:35:18.742 "params": { 00:35:18.742 "bdev_auto_examine": false 00:35:18.742 } 00:35:18.742 } 00:35:18.742 ] 
00:35:18.742 } 00:35:18.742 ] 00:35:18.742 }' 00:35:19.000 [2024-06-10 19:19:33.515482] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:35:19.000 [2024-06-10 19:19:33.515541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875019 ] 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:02.0 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:02.1 cannot be used 00:35:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.000 EAL: Requested device 0000:b6:02.2 cannot be used 
00:35:19.000 [autotest log condensed: the "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pair repeats for each remaining QAT virtual function, 0000:b6:02.3 through 0000:b8:02.7] 00:35:19.000 [2024-06-10 19:19:33.646985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.000 [2024-06-10 19:19:33.730598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.517  Copying: 64/64 [kB] (average 10 MBps) 00:35:19.517 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:19.517 19:19:34 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.517 19:19:34 chaining -- 
common/autotest_common.sh@10 -- # set +x 00:35:19.517 19:19:34 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:19.517 19:19:34 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.517 19:19:34 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:19.517 19:19:34 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:19.517 19:19:34 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:19.517 19:19:34 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.517 19:19:34 
chaining -- common/autotest_common.sh@10 -- # set +x 00:35:19.776 19:19:34 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:19.776 19:19:34 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.776 19:19:34 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:19.776 19:19:34 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.7E7AIIinds /tmp/tmp.XiKKlvd7qw 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@25 -- # local config 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:19.776 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@31 -- # 
config='{ 00:35:19.776 "subsystems": [ 00:35:19.776 { 00:35:19.776 "subsystem": "bdev", 00:35:19.776 "config": [ 00:35:19.776 { 00:35:19.776 "method": "bdev_nvme_attach_controller", 00:35:19.776 "params": { 00:35:19.776 "trtype": "tcp", 00:35:19.776 "adrfam": "IPv4", 00:35:19.776 "name": "Nvme0", 00:35:19.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.776 "traddr": "10.0.0.2", 00:35:19.776 "trsvcid": "4420" 00:35:19.776 } 00:35:19.776 }, 00:35:19.776 { 00:35:19.776 "method": "bdev_set_options", 00:35:19.776 "params": { 00:35:19.776 "bdev_auto_examine": false 00:35:19.776 } 00:35:19.776 } 00:35:19.776 ] 00:35:19.776 } 00:35:19.776 ] 00:35:19.776 }' 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:35:19.776 19:19:34 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:19.776 "subsystems": [ 00:35:19.776 { 00:35:19.776 "subsystem": "bdev", 00:35:19.776 "config": [ 00:35:19.776 { 00:35:19.776 "method": "bdev_nvme_attach_controller", 00:35:19.776 "params": { 00:35:19.776 "trtype": "tcp", 00:35:19.776 "adrfam": "IPv4", 00:35:19.776 "name": "Nvme0", 00:35:19.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.776 "traddr": "10.0.0.2", 00:35:19.776 "trsvcid": "4420" 00:35:19.776 } 00:35:19.776 }, 00:35:19.776 { 00:35:19.776 "method": "bdev_set_options", 00:35:19.776 "params": { 00:35:19.776 "bdev_auto_examine": false 00:35:19.776 } 00:35:19.776 } 00:35:19.776 ] 00:35:19.776 } 00:35:19.776 ] 00:35:19.776 }' 00:35:19.776 [2024-06-10 19:19:34.452014] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:35:19.776 [2024-06-10 19:19:34.452074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875212 ] 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.0 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.1 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.2 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.3 cannot be used 00:35:19.776 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.4 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.5 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.6 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b6:02.7 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.0 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.1 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.2 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.3 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.4 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.5 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.6 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:01.7 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.0 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.1 cannot be used 00:35:19.776 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.2 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.3 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.4 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.5 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.6 cannot be used 00:35:19.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:19.776 EAL: Requested device 0000:b8:02.7 cannot be used 00:35:20.035 [2024-06-10 19:19:34.587113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.035 [2024-06-10 19:19:34.671872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.551  Copying: 64/64 [kB] (average 12 MBps) 00:35:20.551 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@106 -- # update_stats 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:20.551 19:19:35 chaining 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:20.551 19:19:35 chaining -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:20.551 19:19:35 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.7E7AIIinds --ob Nvme0n1 --bs 4096 --count 16 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@25 -- # local config 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:20.551 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:20.551 19:19:35 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:20.809 19:19:35 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:20.809 "subsystems": [ 00:35:20.810 { 00:35:20.810 "subsystem": "bdev", 00:35:20.810 "config": [ 00:35:20.810 { 00:35:20.810 "method": "bdev_nvme_attach_controller", 00:35:20.810 "params": 
{ 00:35:20.810 "trtype": "tcp", 00:35:20.810 "adrfam": "IPv4", 00:35:20.810 "name": "Nvme0", 00:35:20.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.810 "traddr": "10.0.0.2", 00:35:20.810 "trsvcid": "4420" 00:35:20.810 } 00:35:20.810 }, 00:35:20.810 { 00:35:20.810 "method": "bdev_set_options", 00:35:20.810 "params": { 00:35:20.810 "bdev_auto_examine": false 00:35:20.810 } 00:35:20.810 } 00:35:20.810 ] 00:35:20.810 } 00:35:20.810 ] 00:35:20.810 }' 00:35:20.810 19:19:35 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.7E7AIIinds --ob Nvme0n1 --bs 4096 --count 16 00:35:20.810 19:19:35 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:20.810 "subsystems": [ 00:35:20.810 { 00:35:20.810 "subsystem": "bdev", 00:35:20.810 "config": [ 00:35:20.810 { 00:35:20.810 "method": "bdev_nvme_attach_controller", 00:35:20.810 "params": { 00:35:20.810 "trtype": "tcp", 00:35:20.810 "adrfam": "IPv4", 00:35:20.810 "name": "Nvme0", 00:35:20.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.810 "traddr": "10.0.0.2", 00:35:20.810 "trsvcid": "4420" 00:35:20.810 } 00:35:20.810 }, 00:35:20.810 { 00:35:20.810 "method": "bdev_set_options", 00:35:20.810 "params": { 00:35:20.810 "bdev_auto_examine": false 00:35:20.810 } 00:35:20.810 } 00:35:20.810 ] 00:35:20.810 } 00:35:20.810 ] 00:35:20.810 }' 00:35:20.810 [2024-06-10 19:19:35.388732] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:35:20.810 [2024-06-10 19:19:35.388790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875335 ]
00:35:20.810 [2024-06-10 19:19:35.521066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.068 [2024-06-10 19:19:35.602961] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.585  Copying: 64/64 [kB] (average 10 MBps) 00:35:21.585 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.585 19:19:36
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@114 -- # update_stats 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:21.585 19:19:36 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.585 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.844 19:19:36 chaining -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.844 19:19:36 chaining -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:21.844 19:19:36 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@117 -- # : 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.XiKKlvd7qw --ib Nvme0n1 --bs 4096 --count 16 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@25 -- # local config 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:35:21.844 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:35:21.844 19:19:36 chaining -- bdev/chaining.sh@31 -- # config='{ 00:35:21.844 "subsystems": [ 00:35:21.844 { 00:35:21.844 "subsystem": "bdev", 00:35:21.844 "config": [ 00:35:21.844 { 00:35:21.844 
"method": "bdev_nvme_attach_controller", 00:35:21.844 "params": { 00:35:21.844 "trtype": "tcp", 00:35:21.844 "adrfam": "IPv4", 00:35:21.844 "name": "Nvme0", 00:35:21.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.844 "traddr": "10.0.0.2", 00:35:21.845 "trsvcid": "4420" 00:35:21.845 } 00:35:21.845 }, 00:35:21.845 { 00:35:21.845 "method": "bdev_set_options", 00:35:21.845 "params": { 00:35:21.845 "bdev_auto_examine": false 00:35:21.845 } 00:35:21.845 } 00:35:21.845 ] 00:35:21.845 } 00:35:21.845 ] 00:35:21.845 }' 00:35:21.845 19:19:36 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.XiKKlvd7qw --ib Nvme0n1 --bs 4096 --count 16 00:35:21.845 19:19:36 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:35:21.845 "subsystems": [ 00:35:21.845 { 00:35:21.845 "subsystem": "bdev", 00:35:21.845 "config": [ 00:35:21.845 { 00:35:21.845 "method": "bdev_nvme_attach_controller", 00:35:21.845 "params": { 00:35:21.845 "trtype": "tcp", 00:35:21.845 "adrfam": "IPv4", 00:35:21.845 "name": "Nvme0", 00:35:21.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.845 "traddr": "10.0.0.2", 00:35:21.845 "trsvcid": "4420" 00:35:21.845 } 00:35:21.845 }, 00:35:21.845 { 00:35:21.845 "method": "bdev_set_options", 00:35:21.845 "params": { 00:35:21.845 "bdev_auto_examine": false 00:35:21.845 } 00:35:21.845 } 00:35:21.845 ] 00:35:21.845 } 00:35:21.845 ] 00:35:21.845 }' 00:35:22.103 [2024-06-10 19:19:36.625761] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:35:22.103 [2024-06-10 19:19:36.625822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875631 ]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b6:02.4 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b6:02.5 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b6:02.6 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b6:02.7 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.0 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.1 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.2 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.3 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.4 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.5 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.6 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:01.7 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.0 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.1 cannot be used 00:35:22.104 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.2 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.3 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.4 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.5 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.6 cannot be used 00:35:22.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:22.104 EAL: Requested device 0000:b8:02.7 cannot be used 00:35:22.104 [2024-06-10 19:19:36.759124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.104 [2024-06-10 19:19:36.841267] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.670  Copying: 64/64 [kB] (average 484 kBps) 00:35:22.670 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:35:22.670 19:19:37 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:22.670 19:19:37 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.670 19:19:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:22.670 19:19:37 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.929 19:19:37 
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.7E7AIIinds /tmp/tmp.XiKKlvd7qw 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.7E7AIIinds /tmp/tmp.XiKKlvd7qw 00:35:22.929 19:19:37 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@117 -- # sync 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@120 -- # set +e 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:22.929 rmmod nvme_tcp 
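Each `get_stat` check above reduces to one `jq` filter over `accel_get_stats` output. A sketch using a fabricated stats payload (the numbers here are illustrative; the real ones come from the RPC), showing both filters the test uses:

```shell
# Fabricated accel_get_stats payload; its shape matches what the chaining
# test's jq filters expect.
stats='{"sequence_executed": 47,
        "operations": [
          {"opcode": "encrypt", "executed": 36},
          {"opcode": "decrypt", "executed": 46},
          {"opcode": "copy",    "executed": 4}]}'

# Top-level counter, as in `get_stat sequence_executed`:
echo "$stats" | jq -r '.sequence_executed'
# Per-opcode counter, as in `get_stat executed decrypt`:
echo "$stats" | jq -r '.operations[] | select(.opcode == "decrypt").executed'
```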
00:35:22.929 rmmod nvme_fabrics 00:35:22.929 rmmod nvme_keyring 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@124 -- # set -e 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@125 -- # return 0 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@489 -- # '[' -n 1874522 ']' 00:35:22.929 19:19:37 chaining -- nvmf/common.sh@490 -- # killprocess 1874522 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@949 -- # '[' -z 1874522 ']' 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@953 -- # kill -0 1874522 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@954 -- # uname 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:22.929 19:19:37 chaining -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1874522 00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1874522' 00:35:23.187 killing process with pid 1874522 00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@968 -- # kill 1874522 00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@973 -- # wait 1874522 00:35:23.187 19:19:37 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:23.187 19:19:37 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:23.187 19:19:37 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:23.187 19:19:37 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:23.187 19:19:37 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:23.187 19:19:37 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
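The `killprocess` sequence visible above (liveness check with `kill -0`, comm lookup via `ps`, then kill and wait) is a reusable shutdown pattern. A simplified sketch of it:

```shell
# Simplified version of the harness's killprocess: refuse dead pids and sudo
# wrappers, then terminate and reap the target process.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1   # process already gone
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1           # never signal a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap it if it is our child
}
```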
00:35:23.187 19:19:37 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.719 19:19:40 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:25.719 19:19:40 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:35:25.719 19:19:40 chaining -- bdev/chaining.sh@132 -- # bperfpid=1876220 00:35:25.719 19:19:40 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:35:25.719 19:19:40 chaining -- bdev/chaining.sh@134 -- # waitforlisten 1876220 00:35:25.719 19:19:40 chaining -- common/autotest_common.sh@830 -- # '[' -z 1876220 ']' 00:35:25.719 19:19:40 chaining -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.719 19:19:40 chaining -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:25.719 19:19:40 chaining -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.719 19:19:40 chaining -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:25.719 19:19:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:25.719 [2024-06-10 19:19:40.078080] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
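`waitforlisten` above blocks until the freshly launched bdevperf (started with `--wait-for-rpc`) exposes its RPC socket. A hedged sketch of that polling loop; the default socket path matches the log, while the retry budget and poll interval are assumptions:

```shell
# Poll until the app's UNIX RPC socket appears, giving up if the process
# dies first or max_retries polls elapse.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # app exited while starting up
    [ -S "$rpc_addr" ] && return 0           # socket is up, RPC is reachable
    sleep 0.1
  done
  return 1
}
```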
00:35:25.720 [2024-06-10 19:19:40.078142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876220 ]
[2024-06-10 19:19:40.209189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.720 [2024-06-10 19:19:40.292061] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.284 19:19:40 chaining -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:26.284 19:19:40 chaining -- common/autotest_common.sh@863 -- # return 0 00:35:26.284 19:19:40 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:35:26.284 19:19:40 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.284 19:19:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:26.542 malloc0 00:35:26.542 true 00:35:26.542 true 00:35:26.542 [2024-06-10 19:19:41.120538] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:26.542 crypto0 00:35:26.542 [2024-06-10 19:19:41.128563] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:35:26.542 crypto1 00:35:26.542 19:19:41 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.542 19:19:41 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
perform_tests 00:35:26.542 Running I/O for 5 seconds... 00:35:31.806 00:35:31.806 Latency(us) 00:35:31.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.806 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:35:31.806 Verification LBA range: start 0x0 length 0x2000 00:35:31.806 crypto1 : 5.01 12493.29 48.80 0.00 0.00 20429.71 3932.16 13212.06 00:35:31.806 =================================================================================================================== 00:35:31.806 Total : 12493.29 48.80 0.00 0.00 20429.71 3932.16 13212.06 00:35:31.806 0 00:35:31.806 19:19:46 chaining -- bdev/chaining.sh@146 -- # killprocess 1876220 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@949 -- # '[' -z 1876220 ']' 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@953 -- # kill -0 1876220 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@954 -- # uname 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1876220 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1876220' 00:35:31.806 killing process with pid 1876220 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@968 -- # kill 1876220 00:35:31.806 Received shutdown signal, test time was about 5.000000 seconds 00:35:31.806 00:35:31.806 Latency(us) 00:35:31.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.806 =================================================================================================================== 00:35:31.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:31.806 19:19:46 chaining -- 
common/autotest_common.sh@973 -- # wait 1876220 00:35:31.806 19:19:46 chaining -- bdev/chaining.sh@152 -- # bperfpid=1877292 00:35:31.806 19:19:46 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:35:31.806 19:19:46 chaining -- bdev/chaining.sh@154 -- # waitforlisten 1877292 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@830 -- # '[' -z 1877292 ']' 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.806 19:19:46 chaining -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:31.807 19:19:46 chaining -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.807 19:19:46 chaining -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:31.807 19:19:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:32.065 [2024-06-10 19:19:46.579505] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:35:32.065 [2024-06-10 19:19:46.579568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877292 ]
[2024-06-10 19:19:46.710449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.066 [2024-06-10 19:19:46.796952] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.000 19:19:47 chaining -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:33.000 19:19:47 chaining -- common/autotest_common.sh@863 -- # return 0 00:35:33.000 19:19:47 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:35:33.000 19:19:47 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:33.000 19:19:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:33.000 malloc0 00:35:33.000 true 00:35:33.000 true 00:35:33.000 [2024-06-10 19:19:47.611359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:35:33.000 [2024-06-10 19:19:47.611406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:33.000 [2024-06-10 19:19:47.611424] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x101cc70 00:35:33.000 [2024-06-10 19:19:47.611436] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:33.000 [2024-06-10
19:19:47.612435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:33.000 [2024-06-10 19:19:47.612459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:35:33.000 pt0 00:35:33.000 [2024-06-10 19:19:47.619388] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:33.000 crypto0 00:35:33.000 [2024-06-10 19:19:47.627407] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:35:33.000 crypto1 00:35:33.000 19:19:47 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.000 19:19:47 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:33.000 Running I/O for 5 seconds... 00:35:38.265 00:35:38.265 Latency(us) 00:35:38.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.265 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:35:38.265 Verification LBA range: start 0x0 length 0x2000 00:35:38.265 crypto1 : 5.01 9920.77 38.75 0.00 0.00 25728.46 2070.94 15938.36 00:35:38.265 =================================================================================================================== 00:35:38.265 Total : 9920.77 38.75 0.00 0.00 25728.46 2070.94 15938.36 00:35:38.265 0 00:35:38.265 19:19:52 chaining -- bdev/chaining.sh@167 -- # killprocess 1877292 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@949 -- # '[' -z 1877292 ']' 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@953 -- # kill -0 1877292 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@954 -- # uname 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1877292 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:38.265 19:19:52 chaining 
-- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1877292' 00:35:38.265 killing process with pid 1877292 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@968 -- # kill 1877292 00:35:38.265 Received shutdown signal, test time was about 5.000000 seconds 00:35:38.265 00:35:38.265 Latency(us) 00:35:38.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.265 =================================================================================================================== 00:35:38.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:38.265 19:19:52 chaining -- common/autotest_common.sh@973 -- # wait 1877292 00:35:38.524 19:19:53 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:35:38.524 19:19:53 chaining -- bdev/chaining.sh@170 -- # killprocess 1877292 00:35:38.524 19:19:53 chaining -- common/autotest_common.sh@949 -- # '[' -z 1877292 ']' 00:35:38.524 19:19:53 chaining -- common/autotest_common.sh@953 -- # kill -0 1877292 00:35:38.524 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1877292) - No such process 00:35:38.524 19:19:53 chaining -- common/autotest_common.sh@976 -- # echo 'Process with pid 1877292 is not found' 00:35:38.524 Process with pid 1877292 is not found 00:35:38.524 19:19:53 chaining -- bdev/chaining.sh@171 -- # wait 1877292 00:35:38.524 19:19:53 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.524 19:19:53 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:38.524 19:19:53 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:35:38.524 19:19:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@296 -- # e810=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@297 -- # x722=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@298 -- # mlx=() 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.524 19:19:53 
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:38.524 19:19:53 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:38.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:38.525 
19:19:53 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:38.525 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:38.525 Found net devices under 0000:af:00.0: cvl_0_0 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:35:38.525 19:19:53 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:38.525 Found net devices under 0000:af:00.1: cvl_0_1 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.525 19:19:53 chaining -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:38.525 19:19:53 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:38.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:35:38.784 00:35:38.784 --- 10.0.0.2 ping statistics --- 00:35:38.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.784 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:38.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:35:38.784 00:35:38.784 --- 10.0.0.1 ping statistics --- 00:35:38.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.784 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@422 -- # return 0 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:38.784 19:19:53 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@481 -- # nvmfpid=1878390 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:38.784 19:19:53 chaining -- nvmf/common.sh@482 -- # waitforlisten 1878390 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@830 -- # '[' -z 1878390 ']' 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:38.784 19:19:53 
chaining -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:38.784 19:19:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:38.784 [2024-06-10 19:19:53.452459] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:35:38.784 [2024-06-10 19:19:53.452520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: 
Requested device 0000:b6:02.0 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.1 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.2 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.3 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.4 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.5 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.6 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b6:02.7 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.0 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.1 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.2 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.3 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.4 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.5 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 
0000:b8:01.6 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:01.7 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.0 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.1 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.2 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.3 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.4 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.5 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.6 cannot be used 00:35:38.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:38.784 EAL: Requested device 0000:b8:02.7 cannot be used 00:35:39.042 [2024-06-10 19:19:53.584157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.042 [2024-06-10 19:19:53.668295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.042 [2024-06-10 19:19:53.668344] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.042 [2024-06-10 19:19:53.668357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.042 [2024-06-10 19:19:53.668369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:39.042 [2024-06-10 19:19:53.668379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.042 [2024-06-10 19:19:53.668406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.659 19:19:54 chaining -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:39.659 19:19:54 chaining -- common/autotest_common.sh@863 -- # return 0 00:35:39.659 19:19:54 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:39.659 19:19:54 chaining -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:39.659 19:19:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:39.659 19:19:54 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.659 19:19:54 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:35:39.659 19:19:54 chaining -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.659 19:19:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:39.659 malloc0 00:35:39.932 [2024-06-10 19:19:54.402369] tcp.c: 724:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.932 [2024-06-10 19:19:54.418587] tcp.c:1053:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.932 19:19:54 chaining -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.932 19:19:54 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:35:39.932 19:19:54 chaining -- bdev/chaining.sh@189 -- # bperfpid=1878668 00:35:39.932 19:19:54 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:35:39.932 19:19:54 chaining -- bdev/chaining.sh@191 -- # waitforlisten 1878668 /var/tmp/bperf.sock 00:35:39.932 19:19:54 chaining -- common/autotest_common.sh@830 -- # '[' -z 1878668 ']' 00:35:39.932 19:19:54 chaining -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:39.932 19:19:54 chaining -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:39.932 19:19:54 chaining -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:39.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:39.932 19:19:54 chaining -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:39.932 19:19:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:39.932 [2024-06-10 19:19:54.488733] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 00:35:39.932 [2024-06-10 19:19:54.488789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878668 ] 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:39.932 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.0 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.1 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.2 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.3 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.4 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.5 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.6 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b6:02.7 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.0 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.1 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.2 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.3 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.4 cannot be used 00:35:39.932 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.5 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.6 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:01.7 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:02.0 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:02.1 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:02.2 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:02.3 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.932 EAL: Requested device 0000:b8:02.4 cannot be used 00:35:39.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.933 EAL: Requested device 0000:b8:02.5 cannot be used 00:35:39.933 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.933 EAL: Requested device 0000:b8:02.6 cannot be used 00:35:39.933 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.933 EAL: Requested device 0000:b8:02.7 cannot be used 00:35:39.933 [2024-06-10 19:19:54.623655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.191 [2024-06-10 19:19:54.714388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.757 19:19:55 chaining -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:40.757 19:19:55 chaining -- common/autotest_common.sh@863 -- # return 0 00:35:40.757 19:19:55 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:35:40.757 19:19:55 
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:35:41.015 [2024-06-10 19:19:55.729278] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:41.015 nvme0n1 00:35:41.015 true 00:35:41.015 crypto0 00:35:41.015 19:19:55 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:41.273 Running I/O for 5 seconds... 00:35:46.538 00:35:46.538 Latency(us) 00:35:46.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.538 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:35:46.538 Verification LBA range: start 0x0 length 0x2000 00:35:46.538 crypto0 : 5.02 9401.88 36.73 0.00 0.00 27146.43 2988.44 24431.82 00:35:46.538 =================================================================================================================== 00:35:46.538 Total : 9401.88 36.73 0.00 0.00 27146.43 2988.44 24431.82 00:35:46.538 0 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:46.538 19:20:00 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:46.538 
19:20:01 chaining -- bdev/chaining.sh@205 -- # sequence=94346 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:46.538 19:20:01 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@206 -- # encrypt=47173 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:46.796 19:20:01 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@207 -- # decrypt=47173 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:35:47.054 19:20:01 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:47.312 19:20:01 chaining -- bdev/chaining.sh@208 -- # crc32c=94346 00:35:47.312 19:20:01 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:35:47.312 19:20:01 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:35:47.313 19:20:01 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:35:47.313 19:20:01 chaining -- bdev/chaining.sh@214 -- # killprocess 1878668 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@949 -- # '[' -z 1878668 ']' 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@953 -- # kill -0 1878668 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@954 -- # uname 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1878668 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:47.313 19:20:01 
chaining -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1878668' 00:35:47.313 killing process with pid 1878668 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@968 -- # kill 1878668 00:35:47.313 Received shutdown signal, test time was about 5.000000 seconds 00:35:47.313 00:35:47.313 Latency(us) 00:35:47.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.313 =================================================================================================================== 00:35:47.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:47.313 19:20:01 chaining -- common/autotest_common.sh@973 -- # wait 1878668 00:35:47.571 19:20:02 chaining -- bdev/chaining.sh@219 -- # bperfpid=1879952 00:35:47.571 19:20:02 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:35:47.571 19:20:02 chaining -- bdev/chaining.sh@221 -- # waitforlisten 1879952 /var/tmp/bperf.sock 00:35:47.571 19:20:02 chaining -- common/autotest_common.sh@830 -- # '[' -z 1879952 ']' 00:35:47.571 19:20:02 chaining -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.571 19:20:02 chaining -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:47.571 19:20:02 chaining -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.571 19:20:02 chaining -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:47.571 19:20:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:47.571 [2024-06-10 19:20:02.149125] Starting SPDK v24.09-pre git sha1 5456a66b7 / DPDK 24.03.0 initialization... 
00:35:47.571 [2024-06-10 19:20:02.149189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879952 ] 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.0 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.1 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.2 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.3 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.4 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.5 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.6 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:01.7 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.0 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.1 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.2 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.3 cannot be used 00:35:47.571 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.4 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.5 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.6 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b6:02.7 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.0 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.1 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.2 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.3 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.4 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.5 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.6 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:01.7 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:02.0 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:02.1 cannot be used 00:35:47.571 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:02.2 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:02.3 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:02.4 cannot be used 00:35:47.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.571 EAL: Requested device 0000:b8:02.5 cannot be used 00:35:47.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.572 EAL: Requested device 0000:b8:02.6 cannot be used 00:35:47.572 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:47.572 EAL: Requested device 0000:b8:02.7 cannot be used 00:35:47.572 [2024-06-10 19:20:02.281419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.829 [2024-06-10 19:20:02.367785] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.394 19:20:02 chaining -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:48.394 19:20:02 chaining -- common/autotest_common.sh@863 -- # return 0 00:35:48.395 19:20:02 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:35:48.395 19:20:02 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:35:48.652 [2024-06-10 19:20:03.359861] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:35:48.652 nvme0n1 00:35:48.652 true 00:35:48.652 crypto0 00:35:48.652 19:20:03 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.910 Running I/O for 5 seconds... 
00:35:54.174 00:35:54.174 Latency(us) 00:35:54.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.174 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:35:54.174 Verification LBA range: start 0x0 length 0x200 00:35:54.174 crypto0 : 5.01 1745.27 109.08 0.00 0.00 17986.54 484.97 26633.83 00:35:54.174 =================================================================================================================== 00:35:54.174 Total : 1745.27 109.08 0.00 0.00 17986.54 484.97 26633.83 00:35:54.174 0 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@233 -- # sequence=17478 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:54.174 19:20:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:54.174 19:20:08 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:54.175 19:20:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:54.175 19:20:08 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:54.175 19:20:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:54.175 19:20:08 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@234 -- # encrypt=8739 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:54.433 19:20:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@235 -- # decrypt=8739 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:35:54.691 19:20:09 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:35:54.691 19:20:09 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:54.949 19:20:09 chaining -- bdev/chaining.sh@236 -- # crc32c=17478 00:35:54.949 19:20:09 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:35:54.949 19:20:09 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:35:54.949 19:20:09 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:35:54.949 19:20:09 chaining -- bdev/chaining.sh@242 -- # killprocess 1879952 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@949 -- # '[' -z 1879952 ']' 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@953 -- # kill -0 1879952 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@954 -- # uname 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1879952 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1879952' 00:35:54.949 killing process with pid 1879952 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@968 -- # kill 1879952 00:35:54.949 Received shutdown signal, test time was about 5.000000 seconds 00:35:54.949 00:35:54.949 Latency(us) 00:35:54.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.949 
=================================================================================================================== 00:35:54.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.949 19:20:09 chaining -- common/autotest_common.sh@973 -- # wait 1879952 00:35:55.207 19:20:09 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@117 -- # sync 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@120 -- # set +e 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:55.207 rmmod nvme_tcp 00:35:55.207 rmmod nvme_fabrics 00:35:55.207 rmmod nvme_keyring 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@124 -- # set -e 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@125 -- # return 0 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@489 -- # '[' -n 1878390 ']' 00:35:55.207 19:20:09 chaining -- nvmf/common.sh@490 -- # killprocess 1878390 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@949 -- # '[' -z 1878390 ']' 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@953 -- # kill -0 1878390 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@954 -- # uname 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1878390 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1878390' 00:35:55.207 killing process with pid 
1878390 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@968 -- # kill 1878390 00:35:55.207 19:20:09 chaining -- common/autotest_common.sh@973 -- # wait 1878390 00:35:55.465 19:20:10 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:55.465 19:20:10 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:55.465 19:20:10 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:55.465 19:20:10 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:55.465 19:20:10 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:55.465 19:20:10 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.465 19:20:10 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:55.465 19:20:10 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.999 19:20:12 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:57.999 19:20:12 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:35:57.999 00:35:57.999 real 0m50.340s 00:35:57.999 user 0m59.217s 00:35:57.999 sys 0m13.984s 00:35:57.999 19:20:12 chaining -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:57.999 19:20:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:35:57.999 ************************************ 00:35:57.999 END TEST chaining 00:35:57.999 ************************************ 00:35:57.999 19:20:12 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:35:57.999 19:20:12 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:35:57.999 19:20:12 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:35:57.999 19:20:12 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:35:57.999 19:20:12 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:35:57.999 19:20:12 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:35:57.999 19:20:12 -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:57.999 19:20:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.999 19:20:12 
-- spdk/autotest.sh@383 -- # autotest_cleanup 00:35:57.999 19:20:12 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:35:57.999 19:20:12 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:35:57.999 19:20:12 -- common/autotest_common.sh@10 -- # set +x 00:36:04.559 INFO: APP EXITING 00:36:04.559 INFO: killing all VMs 00:36:04.559 INFO: killing vhost app 00:36:04.559 WARN: no vhost pid file found 00:36:04.559 INFO: EXIT DONE 00:36:07.839 Waiting for block devices as requested 00:36:07.839 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:07.839 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:08.096 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:08.097 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:08.097 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:08.354 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:08.354 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:08.354 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:08.612 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:08.612 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:08.612 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:08.870 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:08.870 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:08.870 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:09.129 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:09.130 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:09.130 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:36:13.318 Cleaning 00:36:13.318 Removing: /var/run/dpdk/spdk0/config 00:36:13.318 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:13.318 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:13.318 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:13.318 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:13.318 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:13.318 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:13.318 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:13.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:13.577 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:13.577 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:13.577 Removing: /dev/shm/nvmf_trace.0 00:36:13.577 Removing: /dev/shm/spdk_tgt_trace.pid1567625 00:36:13.577 Removing: /var/run/dpdk/spdk0 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1562606 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1566158 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1567625 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1568328 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1569165 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1569445 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1570552 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1570677 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1570940 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1574301 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1576336 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1576730 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1577174 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1577517 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1577844 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1578127 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1578417 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1578730 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1579577 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1582896 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1583111 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1583425 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1583701 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1583959 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1584029 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1584347 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1584725 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1585006 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1585602 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1586007 00:36:13.577 Removing: 
/var/run/dpdk/spdk_pid1586298 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1586578 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1586868 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1587153 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1587441 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1587720 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1588013 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1588295 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1588582 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1588863 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1589154 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1589436 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1589729 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1590013 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1590295 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1590588 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1591031 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1591435 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1591735 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1592267 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1592569 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1593023 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1593402 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1593579 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1594062 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1594463 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1595007 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1595050 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1599934 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1602227 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1604357 00:36:13.577 Removing: /var/run/dpdk/spdk_pid1605562 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1606941 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1607244 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1607392 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1607537 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1612448 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1613021 
00:36:13.835 Removing: /var/run/dpdk/spdk_pid1614359 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1614644 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1621592 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1623671 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1624720 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1629679 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1631755 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1632879 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1637749 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1640629 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1641701 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1653116 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1656335 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1657524 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1668976 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1671666 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1672829 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1684264 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1688184 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1689913 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1702827 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1705820 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1707048 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1720179 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1723169 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1724972 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1737818 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1742347 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1743727 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1744960 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1748774 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1754989 00:36:13.835 Removing: /var/run/dpdk/spdk_pid1758548 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1764251 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1768489 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1775079 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1778277 00:36:13.836 Removing: 
/var/run/dpdk/spdk_pid1785905 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1788624 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1796541 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1799292 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1806894 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1809619 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1814630 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1815157 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1815657 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1815989 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1816589 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1817456 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1818402 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1819013 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1821140 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1823255 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1825869 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1827725 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1829806 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1831832 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1833940 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1835702 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1836430 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1836976 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1839478 00:36:13.836 Removing: /var/run/dpdk/spdk_pid1841894 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1844202 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1845611 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1847112 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1847838 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1847942 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1848012 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1848310 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1848578 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1849852 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1851857 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1853855 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1854930 
00:36:14.094 Removing: /var/run/dpdk/spdk_pid1855983 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1856341 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1856438 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1856554 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1858026 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1858833 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1859378 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1861763 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1864286 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1866564 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1867903 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1869503 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1870072 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1870272 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1874713 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1875019 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1875212 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1875335 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1875631 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1876220 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1877292 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1878668 00:36:14.094 Removing: /var/run/dpdk/spdk_pid1879952 00:36:14.094 Clean 00:36:14.094 19:20:28 -- common/autotest_common.sh@1450 -- # return 0 00:36:14.094 19:20:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:14.094 19:20:28 -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:14.094 19:20:28 -- common/autotest_common.sh@10 -- # set +x 00:36:14.355 19:20:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:14.355 19:20:28 -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:14.355 19:20:28 -- common/autotest_common.sh@10 -- # set +x 00:36:14.355 19:20:28 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:36:14.355 19:20:28 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 
00:36:14.355 19:20:28 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:36:14.355 19:20:28 -- spdk/autotest.sh@391 -- # hash lcov 00:36:14.355 19:20:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:14.355 19:20:28 -- spdk/autotest.sh@393 -- # hostname 00:36:14.355 19:20:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:36:14.646 geninfo: WARNING: invalid characters removed from testname! 00:36:41.193 19:20:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:43.726 19:20:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:46.259 19:21:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:48.166 19:21:02 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:50.698 19:21:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:53.230 19:21:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:36:55.764 19:21:09 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:55.764 19:21:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:36:55.764 19:21:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:55.764 19:21:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.764 19:21:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.764 19:21:10 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.764 19:21:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.764 19:21:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.764 19:21:10 -- paths/export.sh@5 -- $ export PATH
00:36:55.764 19:21:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.764 19:21:10 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:36:55.764 19:21:10 -- common/autobuild_common.sh@437 -- $ date +%s
00:36:55.764 19:21:10 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718040070.XXXXXX
00:36:55.764 19:21:10 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718040070.VtaWCL
00:36:55.764 19:21:10 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:36:55.764 19:21:10 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:36:55.764 19:21:10 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:36:55.764 19:21:10 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:55.764 19:21:10 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:55.764 19:21:10 -- common/autobuild_common.sh@453 -- $ get_config_params
00:36:55.764 19:21:10 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:36:55.764 19:21:10 -- common/autotest_common.sh@10 -- $ set +x
00:36:55.764 19:21:10 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:36:55.764 19:21:10 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:36:55.764 19:21:10 -- pm/common@17 -- $ local monitor
00:36:55.764 19:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:55.764 19:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:55.764 19:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:55.764 19:21:10 -- pm/common@21 -- $ date +%s
00:36:55.764 19:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:55.764 19:21:10 -- pm/common@21 -- $ date +%s
00:36:55.764 19:21:10 -- pm/common@25 -- $ sleep 1
00:36:55.764 19:21:10 -- pm/common@21 -- $ date +%s
00:36:55.764 19:21:10 -- pm/common@21 -- $ date +%s
00:36:55.764 19:21:10 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718040070
00:36:55.764 19:21:10 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718040070
00:36:55.764 19:21:10 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718040070
00:36:55.764 19:21:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718040070
00:36:55.764 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718040070_collect-cpu-load.pm.log
00:36:55.764 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718040070_collect-vmstat.pm.log
00:36:55.764 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718040070_collect-cpu-temp.pm.log
00:36:55.764 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718040070_collect-bmc-pm.bmc.pm.log
00:36:56.699 19:21:11 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:36:56.699 19:21:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:36:56.699 19:21:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:56.699 19:21:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:56.699 19:21:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:56.699 19:21:11 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:56.699 19:21:11 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:56.699 19:21:11 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:56.699 19:21:11 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:36:56.699 19:21:11 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:56.699 19:21:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:36:56.699 19:21:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:36:56.699 19:21:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:36:56.699 19:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:56.699 19:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:36:56.699 19:21:11 -- pm/common@44 -- $ pid=1896473
00:36:56.699 19:21:11 -- pm/common@50 -- $ kill -TERM 1896473
00:36:56.699 19:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:56.699 19:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:36:56.699 19:21:11 -- pm/common@44 -- $ pid=1896475
00:36:56.699 19:21:11 -- pm/common@50 -- $ kill -TERM 1896475
00:36:56.700 19:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:56.700 19:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:36:56.700 19:21:11 -- pm/common@44 -- $ pid=1896477
00:36:56.700 19:21:11 -- pm/common@50 -- $ kill -TERM 1896477
00:36:56.700 19:21:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:56.700 19:21:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:36:56.700 19:21:11 -- pm/common@44 -- $ pid=1896501
00:36:56.700 19:21:11 -- pm/common@50 -- $ sudo -E kill -TERM 1896501
00:36:56.700 + [[ -n 1433341 ]]
00:36:56.700 + sudo kill 1433341
00:36:56.709 [Pipeline] }
00:36:56.727 [Pipeline] // stage
00:36:56.732 [Pipeline] }
00:36:56.744 [Pipeline] // timeout
00:36:56.748 [Pipeline] }
00:36:56.761 [Pipeline] // catchError
00:36:56.767 [Pipeline] }
00:36:56.783 [Pipeline] // wrap
00:36:56.789 [Pipeline] }
00:36:56.805 [Pipeline] // catchError
00:36:56.815 [Pipeline] stage
00:36:56.817 [Pipeline] { (Epilogue)
00:36:56.831 [Pipeline] catchError
00:36:56.833 [Pipeline] {
00:36:56.848 [Pipeline] echo
00:36:56.849 Cleanup processes
00:36:56.855 [Pipeline] sh
00:36:57.137 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:57.137 1896574 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:36:57.137 1896921 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:57.152 [Pipeline] sh
00:36:57.433 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:57.433 ++ grep -v 'sudo pgrep'
00:36:57.433 ++ awk '{print $1}'
00:36:57.433 + sudo kill -9 1896574
00:36:57.445 [Pipeline] sh
00:36:57.726 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:57.726 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:37:05.876 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:37:11.208 [Pipeline] sh
00:37:11.490 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:11.490 Artifacts sizes are good
00:37:11.505 [Pipeline] archiveArtifacts
00:37:11.512 Archiving artifacts
00:37:11.659 [Pipeline] sh
00:37:11.942 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:37:11.958 [Pipeline] cleanWs
00:37:11.968 [WS-CLEANUP] Deleting project workspace...
00:37:11.968 [WS-CLEANUP] Deferred wipeout is used...
00:37:11.975 [WS-CLEANUP] done
00:37:11.977 [Pipeline] }
00:37:11.998 [Pipeline] // catchError
00:37:12.011 [Pipeline] sh
00:37:12.289 + logger -p user.info -t JENKINS-CI
00:37:12.295 [Pipeline] }
00:37:12.307 [Pipeline] // stage
00:37:12.311 [Pipeline] }
00:37:12.322 [Pipeline] // node
00:37:12.325 [Pipeline] End of Pipeline
00:37:12.351 Finished: SUCCESS